00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 211 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3712 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.214 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.214 The recommended git tool is: git 00:00:00.214 using credential 00000000-0000-0000-0000-000000000002 00:00:00.215 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.252 Fetching changes from the remote Git repository 00:00:00.254 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.295 Using shallow fetch with depth 1 00:00:00.295 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.295 > git --version # timeout=10 00:00:00.328 > git --version # 'git version 2.39.2' 00:00:00.328 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.345 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.345 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.467 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.481 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.494 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.494 > git config core.sparsecheckout # timeout=10 00:00:06.504 > git read-tree -mu HEAD # timeout=10 00:00:06.520 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.536 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.537 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.634 [Pipeline] Start of Pipeline 00:00:06.647 [Pipeline] library 00:00:06.649 Loading library shm_lib@master 00:00:06.649 Library shm_lib@master is cached. Copying from home. 00:00:06.700 [Pipeline] node 00:00:06.724 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.725 [Pipeline] { 00:00:06.737 [Pipeline] catchError 00:00:06.738 [Pipeline] { 00:00:06.753 [Pipeline] wrap 00:00:06.763 [Pipeline] { 00:00:06.772 [Pipeline] stage 00:00:06.774 [Pipeline] { (Prologue) 00:00:06.791 [Pipeline] echo 00:00:06.792 Node: VM-host-SM9 00:00:06.797 [Pipeline] cleanWs 00:00:06.806 [WS-CLEANUP] Deleting project workspace... 00:00:06.806 [WS-CLEANUP] Deferred wipeout is used... 
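The prologue above amounts to a shallow fetch of a single ref from the build-pool repository followed by a forced, detached checkout of the resolved commit. A minimal sketch of that same sequence, assuming a scratch directory (/tmp/jbp is only an example; the URL and fetch flags are taken from the log):

mkdir -p /tmp/jbp && cd /tmp/jbp
git init .
# Shallow fetch of one ref: --depth=1 downloads only the tip commit, which is
# why the later rev-parse/rev-list calls in the job are cheap.
git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
# Forced, detached checkout of whatever the fetch resolved; Jenkins pins the
# exact revision it printed above.
git checkout -f FETCH_HEAD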
00:00:06.813 [WS-CLEANUP] done 00:00:07.047 [Pipeline] setCustomBuildProperty 00:00:07.131 [Pipeline] httpRequest 00:00:08.126 [Pipeline] echo 00:00:08.128 Sorcerer 10.211.164.112 is alive 00:00:08.136 [Pipeline] retry 00:00:08.138 [Pipeline] { 00:00:08.151 [Pipeline] httpRequest 00:00:08.155 HttpMethod: GET 00:00:08.156 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.156 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.170 Response Code: HTTP/1.1 200 OK 00:00:08.171 Success: Status code 200 is in the accepted range: 200,404 00:00:08.171 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.404 [Pipeline] } 00:00:26.420 [Pipeline] // retry 00:00:26.427 [Pipeline] sh 00:00:26.706 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.722 [Pipeline] httpRequest 00:00:27.512 [Pipeline] echo 00:00:27.514 Sorcerer 10.211.164.112 is alive 00:00:27.522 [Pipeline] retry 00:00:27.524 [Pipeline] { 00:00:27.538 [Pipeline] httpRequest 00:00:27.543 HttpMethod: GET 00:00:27.545 URL: http://10.211.164.112/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:27.545 Sending request to url: http://10.211.164.112/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:27.563 Response Code: HTTP/1.1 200 OK 00:00:27.564 Success: Status code 200 is in the accepted range: 200,404 00:00:27.564 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:55.490 [Pipeline] } 00:01:55.506 [Pipeline] // retry 00:01:55.513 [Pipeline] sh 00:01:55.792 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:59.091 [Pipeline] sh 00:01:59.384 + git -C spdk log --oneline -n5 00:01:59.384 b18e1bd62 version: v24.09.1-pre 00:01:59.384 19524ad45 version: v24.09 00:01:59.384 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:59.384 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:59.384 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:59.405 [Pipeline] withCredentials 00:01:59.416 > git --version # timeout=10 00:01:59.430 > git --version # 'git version 2.39.2' 00:01:59.447 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:59.449 [Pipeline] { 00:01:59.458 [Pipeline] retry 00:01:59.461 [Pipeline] { 00:01:59.476 [Pipeline] sh 00:01:59.758 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:59.770 [Pipeline] } 00:01:59.787 [Pipeline] // retry 00:01:59.792 [Pipeline] } 00:01:59.809 [Pipeline] // withCredentials 00:01:59.818 [Pipeline] httpRequest 00:02:00.247 [Pipeline] echo 00:02:00.249 Sorcerer 10.211.164.112 is alive 00:02:00.258 [Pipeline] retry 00:02:00.259 [Pipeline] { 00:02:00.272 [Pipeline] httpRequest 00:02:00.276 HttpMethod: GET 00:02:00.277 URL: http://10.211.164.112/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:00.277 Sending request to url: http://10.211.164.112/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:00.280 Response Code: HTTP/1.1 200 OK 00:02:00.280 Success: Status code 200 is in the accepted range: 200,404 00:02:00.280 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:05.183 [Pipeline] } 00:02:05.199 [Pipeline] // retry 00:02:05.207 [Pipeline] sh 00:02:05.487 + 
tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:06.875 [Pipeline] sh 00:02:07.155 + git -C dpdk log --oneline -n5 00:02:07.155 eeb0605f11 version: 23.11.0 00:02:07.155 238778122a doc: update release notes for 23.11 00:02:07.155 46aa6b3cfc doc: fix description of RSS features 00:02:07.155 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:07.155 7e421ae345 devtools: support skipping forbid rule check 00:02:07.171 [Pipeline] writeFile 00:02:07.186 [Pipeline] sh 00:02:07.469 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:07.480 [Pipeline] sh 00:02:07.760 + cat autorun-spdk.conf 00:02:07.760 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.760 SPDK_TEST_NVMF=1 00:02:07.760 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.760 SPDK_TEST_URING=1 00:02:07.760 SPDK_TEST_VFIOUSER=1 00:02:07.760 SPDK_TEST_USDT=1 00:02:07.760 SPDK_RUN_UBSAN=1 00:02:07.760 NET_TYPE=virt 00:02:07.760 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:07.760 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.760 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.767 RUN_NIGHTLY=1 00:02:07.769 [Pipeline] } 00:02:07.783 [Pipeline] // stage 00:02:07.797 [Pipeline] stage 00:02:07.799 [Pipeline] { (Run VM) 00:02:07.811 [Pipeline] sh 00:02:08.091 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:08.091 + echo 'Start stage prepare_nvme.sh' 00:02:08.091 Start stage prepare_nvme.sh 00:02:08.091 + [[ -n 3 ]] 00:02:08.091 + disk_prefix=ex3 00:02:08.091 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:08.091 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:08.091 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:08.091 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.091 ++ SPDK_TEST_NVMF=1 00:02:08.091 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.091 ++ SPDK_TEST_URING=1 00:02:08.091 ++ SPDK_TEST_VFIOUSER=1 00:02:08.091 ++ SPDK_TEST_USDT=1 00:02:08.091 ++ SPDK_RUN_UBSAN=1 00:02:08.091 ++ NET_TYPE=virt 00:02:08.091 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:08.091 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:08.091 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.091 ++ RUN_NIGHTLY=1 00:02:08.091 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:08.091 + nvme_files=() 00:02:08.091 + declare -A nvme_files 00:02:08.091 + backend_dir=/var/lib/libvirt/images/backends 00:02:08.091 + nvme_files['nvme.img']=5G 00:02:08.091 + nvme_files['nvme-cmb.img']=5G 00:02:08.091 + nvme_files['nvme-multi0.img']=4G 00:02:08.091 + nvme_files['nvme-multi1.img']=4G 00:02:08.091 + nvme_files['nvme-multi2.img']=4G 00:02:08.091 + nvme_files['nvme-openstack.img']=8G 00:02:08.091 + nvme_files['nvme-zns.img']=5G 00:02:08.091 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:08.091 + (( SPDK_TEST_FTL == 1 )) 00:02:08.091 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:08.091 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:08.091 + for nvme in "${!nvme_files[@]}" 00:02:08.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:08.091 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.091 + for nvme in "${!nvme_files[@]}" 00:02:08.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:08.091 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.091 + for nvme in "${!nvme_files[@]}" 00:02:08.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:08.091 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:08.091 + for nvme in "${!nvme_files[@]}" 00:02:08.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:08.091 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.091 + for nvme in "${!nvme_files[@]}" 00:02:08.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:08.091 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.091 + for nvme in "${!nvme_files[@]}" 00:02:08.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:08.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.350 + for nvme in "${!nvme_files[@]}" 00:02:08.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:08.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.350 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:08.350 + echo 'End stage prepare_nvme.sh' 00:02:08.350 End stage prepare_nvme.sh 00:02:08.361 [Pipeline] sh 00:02:08.641 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:08.641 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:02:08.641 00:02:08.641 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:08.641 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:08.641 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:08.641 HELP=0 00:02:08.641 DRY_RUN=0 00:02:08.641 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:02:08.641 NVME_DISKS_TYPE=nvme,nvme, 00:02:08.641 NVME_AUTO_CREATE=0 00:02:08.641 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:02:08.641 NVME_CMB=,, 00:02:08.641 NVME_PMR=,, 00:02:08.641 NVME_ZNS=,, 00:02:08.641 NVME_MS=,, 00:02:08.641 NVME_FDP=,, 
00:02:08.641 SPDK_VAGRANT_DISTRO=fedora39 00:02:08.641 SPDK_VAGRANT_VMCPU=10 00:02:08.641 SPDK_VAGRANT_VMRAM=12288 00:02:08.641 SPDK_VAGRANT_PROVIDER=libvirt 00:02:08.641 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:08.641 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:08.641 SPDK_OPENSTACK_NETWORK=0 00:02:08.641 VAGRANT_PACKAGE_BOX=0 00:02:08.641 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:08.641 FORCE_DISTRO=true 00:02:08.641 VAGRANT_BOX_VERSION= 00:02:08.641 EXTRA_VAGRANTFILES= 00:02:08.641 NIC_MODEL=e1000 00:02:08.641 00:02:08.641 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:08.641 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:11.926 Bringing machine 'default' up with 'libvirt' provider... 00:02:12.492 ==> default: Creating image (snapshot of base box volume). 00:02:12.492 ==> default: Creating domain with the following settings... 00:02:12.492 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733825747_4bc8316325068f9d15b6 00:02:12.492 ==> default: -- Domain type: kvm 00:02:12.492 ==> default: -- Cpus: 10 00:02:12.492 ==> default: -- Feature: acpi 00:02:12.492 ==> default: -- Feature: apic 00:02:12.492 ==> default: -- Feature: pae 00:02:12.492 ==> default: -- Memory: 12288M 00:02:12.492 ==> default: -- Memory Backing: hugepages: 00:02:12.492 ==> default: -- Management MAC: 00:02:12.492 ==> default: -- Loader: 00:02:12.492 ==> default: -- Nvram: 00:02:12.492 ==> default: -- Base box: spdk/fedora39 00:02:12.492 ==> default: -- Storage pool: default 00:02:12.492 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733825747_4bc8316325068f9d15b6.img (20G) 00:02:12.492 ==> default: -- Volume Cache: default 00:02:12.492 ==> default: -- Kernel: 00:02:12.492 ==> default: -- Initrd: 00:02:12.492 ==> default: -- Graphics Type: vnc 00:02:12.492 ==> default: -- Graphics Port: -1 00:02:12.492 ==> default: -- Graphics IP: 127.0.0.1 00:02:12.492 ==> default: -- Graphics Password: Not defined 00:02:12.492 ==> default: -- Video Type: cirrus 00:02:12.492 ==> default: -- Video VRAM: 9216 00:02:12.492 ==> default: -- Sound Type: 00:02:12.492 ==> default: -- Keymap: en-us 00:02:12.492 ==> default: -- TPM Path: 00:02:12.492 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:12.492 ==> default: -- Command line args: 00:02:12.492 ==> default: -> value=-device, 00:02:12.492 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:12.492 ==> default: -> value=-drive, 00:02:12.492 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:02:12.492 ==> default: -> value=-device, 00:02:12.492 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.492 ==> default: -> value=-device, 00:02:12.492 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:12.492 ==> default: -> value=-drive, 00:02:12.492 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:12.492 ==> default: -> value=-device, 00:02:12.492 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.492 ==> default: -> value=-drive, 00:02:12.492 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:12.492 ==> default: -> value=-device, 00:02:12.492 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.492 ==> default: -> value=-drive, 00:02:12.492 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:12.492 ==> default: -> value=-device, 00:02:12.492 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.752 ==> default: Creating shared folders metadata... 00:02:12.752 ==> default: Starting domain. 00:02:14.132 ==> default: Waiting for domain to get an IP address... 00:02:32.217 ==> default: Waiting for SSH to become available... 00:02:32.217 ==> default: Configuring and enabling network interfaces... 00:02:34.776 default: SSH address: 192.168.121.68:22 00:02:34.776 default: SSH username: vagrant 00:02:34.776 default: SSH auth method: private key 00:02:36.688 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:43.252 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:49.814 ==> default: Mounting SSHFS shared folder... 00:02:51.214 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:51.214 ==> default: Checking Mount.. 00:02:52.150 ==> default: Folder Successfully Mounted! 00:02:52.150 ==> default: Running provisioner: file... 00:02:53.087 default: ~/.gitconfig => .gitconfig 00:02:53.346 00:02:53.346 SUCCESS! 00:02:53.346 00:02:53.346 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:53.346 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:53.346 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:53.346 00:02:53.356 [Pipeline] } 00:02:53.371 [Pipeline] // stage 00:02:53.379 [Pipeline] dir 00:02:53.379 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:53.380 [Pipeline] { 00:02:53.392 [Pipeline] catchError 00:02:53.394 [Pipeline] { 00:02:53.406 [Pipeline] sh 00:02:53.685 + vagrant ssh-config --host vagrant 00:02:53.685 + sed -ne /^Host/,$p 00:02:53.685 + tee ssh_conf 00:02:57.877 Host vagrant 00:02:57.877 HostName 192.168.121.68 00:02:57.877 User vagrant 00:02:57.877 Port 22 00:02:57.877 UserKnownHostsFile /dev/null 00:02:57.877 StrictHostKeyChecking no 00:02:57.877 PasswordAuthentication no 00:02:57.877 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:57.877 IdentitiesOnly yes 00:02:57.877 LogLevel FATAL 00:02:57.877 ForwardAgent yes 00:02:57.877 ForwardX11 yes 00:02:57.877 00:02:57.890 [Pipeline] withEnv 00:02:57.893 [Pipeline] { 00:02:57.904 [Pipeline] sh 00:02:58.184 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:58.184 source /etc/os-release 00:02:58.184 [[ -e /image.version ]] && img=$(< /image.version) 00:02:58.184 # Minimal, systemd-like check. 
00:02:58.184 if [[ -e /.dockerenv ]]; then 00:02:58.184 # Clear garbage from the node's name: 00:02:58.184 # agt-er_autotest_547-896 -> autotest_547-896 00:02:58.184 # $HOSTNAME is the actual container id 00:02:58.184 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:58.184 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:58.184 # We can assume this is a mount from a host where container is running, 00:02:58.184 # so fetch its hostname to easily identify the target swarm worker. 00:02:58.184 container="$(< /etc/hostname) ($agent)" 00:02:58.184 else 00:02:58.184 # Fallback 00:02:58.184 container=$agent 00:02:58.184 fi 00:02:58.184 fi 00:02:58.184 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:58.184 00:02:58.195 [Pipeline] } 00:02:58.211 [Pipeline] // withEnv 00:02:58.219 [Pipeline] setCustomBuildProperty 00:02:58.234 [Pipeline] stage 00:02:58.236 [Pipeline] { (Tests) 00:02:58.252 [Pipeline] sh 00:02:58.532 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:58.806 [Pipeline] sh 00:02:59.086 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:59.360 [Pipeline] timeout 00:02:59.360 Timeout set to expire in 1 hr 0 min 00:02:59.362 [Pipeline] { 00:02:59.376 [Pipeline] sh 00:02:59.711 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:00.278 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:03:00.289 [Pipeline] sh 00:03:00.567 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:00.840 [Pipeline] sh 00:03:01.122 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:01.397 [Pipeline] sh 00:03:01.677 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:01.937 ++ readlink -f spdk_repo 00:03:01.937 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:01.937 + [[ -n /home/vagrant/spdk_repo ]] 00:03:01.937 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:01.937 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:01.937 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:01.937 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:01.937 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:01.937 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:01.937 + cd /home/vagrant/spdk_repo 00:03:01.937 + source /etc/os-release 00:03:01.937 ++ NAME='Fedora Linux' 00:03:01.937 ++ VERSION='39 (Cloud Edition)' 00:03:01.937 ++ ID=fedora 00:03:01.937 ++ VERSION_ID=39 00:03:01.937 ++ VERSION_CODENAME= 00:03:01.937 ++ PLATFORM_ID=platform:f39 00:03:01.937 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:01.937 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:01.937 ++ LOGO=fedora-logo-icon 00:03:01.937 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:01.937 ++ HOME_URL=https://fedoraproject.org/ 00:03:01.937 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:01.937 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:01.937 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:01.937 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:01.937 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:01.937 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:01.937 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:01.937 ++ SUPPORT_END=2024-11-12 00:03:01.937 ++ VARIANT='Cloud Edition' 00:03:01.937 ++ VARIANT_ID=cloud 00:03:01.937 + uname -a 00:03:01.937 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:01.937 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:02.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:02.196 Hugepages 00:03:02.196 node hugesize free / total 00:03:02.196 node0 1048576kB 0 / 0 00:03:02.455 node0 2048kB 0 / 0 00:03:02.455 00:03:02.455 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:02.455 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:02.455 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:02.455 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:02.455 + rm -f /tmp/spdk-ld-path 00:03:02.455 + source autorun-spdk.conf 00:03:02.455 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:02.455 ++ SPDK_TEST_NVMF=1 00:03:02.455 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:02.455 ++ SPDK_TEST_URING=1 00:03:02.455 ++ SPDK_TEST_VFIOUSER=1 00:03:02.455 ++ SPDK_TEST_USDT=1 00:03:02.455 ++ SPDK_RUN_UBSAN=1 00:03:02.455 ++ NET_TYPE=virt 00:03:02.455 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:02.455 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:02.455 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:02.455 ++ RUN_NIGHTLY=1 00:03:02.455 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:02.455 + [[ -n '' ]] 00:03:02.455 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:02.455 + for M in /var/spdk/build-*-manifest.txt 00:03:02.455 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:02.455 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:02.455 + for M in /var/spdk/build-*-manifest.txt 00:03:02.455 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:02.455 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:02.455 + for M in /var/spdk/build-*-manifest.txt 00:03:02.455 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:02.455 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:02.455 ++ uname 00:03:02.455 + [[ Linux == \L\i\n\u\x ]] 00:03:02.455 + sudo dmesg -T 00:03:02.455 + sudo dmesg --clear 00:03:02.455 + dmesg_pid=6009 
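The autorun-spdk.conf sourced above is a flat list of shell variables; autorun.sh and the test scripts source it and branch on the flags. As a rough illustration of that pattern only (run_if is a hypothetical helper, not part of the SPDK scripts; the conf path is the one shown in the log), a conf can be inspected the same way:

#!/usr/bin/env bash
# Illustrative only: source the job configuration and report which suites its
# flags would enable. The real gating logic lives in spdk/autorun.sh.
source /home/vagrant/spdk_repo/autorun-spdk.conf

run_if() {   # run_if <flag value> <description>
    [[ "${1:-0}" == 1 ]] && echo "would run: $2"
}

run_if "$SPDK_RUN_FUNCTIONAL_TEST" "functional test pass"
run_if "$SPDK_TEST_NVMF"           "NVMe-oF tests over transport '$SPDK_TEST_NVMF_TRANSPORT'"
run_if "$SPDK_TEST_URING"          "uring-related tests"
run_if "$SPDK_RUN_UBSAN"           "UBSAN-instrumented build"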
00:03:02.455 + [[ Fedora Linux == FreeBSD ]] 00:03:02.455 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:02.455 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:02.455 + sudo dmesg -Tw 00:03:02.455 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:02.455 + [[ -x /usr/src/fio-static/fio ]] 00:03:02.455 + export FIO_BIN=/usr/src/fio-static/fio 00:03:02.455 + FIO_BIN=/usr/src/fio-static/fio 00:03:02.455 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:02.455 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:02.455 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:02.455 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:02.455 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:02.455 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:02.455 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:02.455 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:02.455 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:02.455 Test configuration: 00:03:02.455 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:02.455 SPDK_TEST_NVMF=1 00:03:02.455 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:02.455 SPDK_TEST_URING=1 00:03:02.455 SPDK_TEST_VFIOUSER=1 00:03:02.455 SPDK_TEST_USDT=1 00:03:02.455 SPDK_RUN_UBSAN=1 00:03:02.455 NET_TYPE=virt 00:03:02.455 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:02.455 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:02.455 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:02.714 RUN_NIGHTLY=1 10:16:37 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:02.714 10:16:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:02.715 10:16:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:02.715 10:16:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:02.715 10:16:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.715 10:16:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.715 10:16:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.715 10:16:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.715 10:16:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.715 10:16:37 -- paths/export.sh@5 -- $ export PATH 00:03:02.715 10:16:37 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.715 10:16:37 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:02.715 10:16:37 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:02.715 10:16:37 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733825797.XXXXXX 00:03:02.715 10:16:37 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733825797.vyxlyy 00:03:02.715 10:16:37 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:02.715 10:16:37 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:03:02.715 10:16:37 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:02.715 10:16:37 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:02.715 10:16:37 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:02.715 10:16:37 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:02.715 10:16:37 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:02.715 10:16:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:02.715 10:16:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.715 10:16:37 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:03:02.715 10:16:37 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:02.715 10:16:37 -- pm/common@17 -- $ local monitor 00:03:02.715 10:16:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.715 10:16:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.715 10:16:37 -- pm/common@25 -- $ sleep 1 00:03:02.715 10:16:37 -- pm/common@21 -- $ date +%s 00:03:02.715 10:16:37 -- pm/common@21 -- $ date +%s 00:03:02.715 10:16:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733825797 00:03:02.715 10:16:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733825797 00:03:02.715 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733825797_collect-vmstat.pm.log 00:03:02.715 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733825797_collect-cpu-load.pm.log 00:03:03.652 10:16:38 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:03.652 10:16:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:03.652 10:16:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:03.652 10:16:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:03.652 10:16:38 -- spdk/autobuild.sh@16 -- $ date -u 
00:03:03.652 Tue Dec 10 10:16:38 AM UTC 2024 00:03:03.652 10:16:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:03.652 v24.09-1-gb18e1bd62 00:03:03.652 10:16:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:03.652 10:16:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:03.652 10:16:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:03.652 10:16:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:03.652 10:16:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:03.652 10:16:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.652 ************************************ 00:03:03.652 START TEST ubsan 00:03:03.652 ************************************ 00:03:03.652 using ubsan 00:03:03.652 10:16:38 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:03.652 00:03:03.652 real 0m0.000s 00:03:03.652 user 0m0.000s 00:03:03.652 sys 0m0.000s 00:03:03.652 10:16:38 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:03.652 10:16:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:03.652 ************************************ 00:03:03.652 END TEST ubsan 00:03:03.652 ************************************ 00:03:03.652 10:16:38 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:03.652 10:16:38 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:03.652 10:16:38 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:03.652 10:16:38 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:03.652 10:16:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:03.652 10:16:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.652 ************************************ 00:03:03.652 START TEST build_native_dpdk 00:03:03.652 ************************************ 00:03:03.652 10:16:38 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:03.652 10:16:38 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:03.652 eeb0605f11 version: 23.11.0 00:03:03.652 238778122a doc: update release notes for 23.11 00:03:03.652 46aa6b3cfc doc: fix description of RSS features 00:03:03.652 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:03.652 7e421ae345 devtools: support skipping forbid rule check 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:03.652 10:16:38 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:03.652 10:16:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:03.652 10:16:38 
build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:03.653 10:16:38 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:03.653 patching file config/rte_config.h 00:03:03.653 Hunk #1 succeeded at 60 (offset 1 line). 00:03:03.653 10:16:38 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:03.653 10:16:38 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:03.653 10:16:38 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:03.653 patching file lib/pcapng/rte_pcapng.c 00:03:03.912 10:16:38 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:03.912 10:16:38 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:03.912 10:16:38 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:03.912 10:16:38 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:03.912 10:16:38 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:03.912 10:16:38 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:03.912 10:16:38 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:09.184 The Meson build system 00:03:09.184 Version: 1.5.0 00:03:09.184 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:09.184 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:09.184 Build type: native build 00:03:09.184 Program cat found: YES (/usr/bin/cat) 00:03:09.184 Project name: DPDK 00:03:09.184 Project version: 23.11.0 00:03:09.184 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:09.184 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:09.184 Host machine cpu family: x86_64 00:03:09.184 Host machine cpu: x86_64 00:03:09.184 Message: ## Building in Developer Mode ## 00:03:09.184 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:09.184 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:09.184 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:09.184 Program python3 found: YES (/usr/bin/python3) 00:03:09.184 Program cat found: YES (/usr/bin/cat) 00:03:09.184 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
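The lt/ge checks traced above split each version string on '.', '-' and ':' and walk the numeric components left to right. A simplified re-implementation of that comparison (the real helpers are lt, ge and cmp_versions in scripts/common.sh; ver_lt here is a hypothetical name and only handles purely numeric components):

#!/usr/bin/env bash
# ver_lt A B -> succeeds if version A sorts strictly before version B.
ver_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first larger component decides
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # all components equal: not strictly less-than
}

ver_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"   # mirrors the 'return 1' traced above
ver_lt 23.11.0 24.07.0 && echo "23.11.0 is older than 24.07.0"       # mirrors the 'return 0' traced above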
00:03:09.184 Compiler for C supports arguments -march=native: YES 00:03:09.184 Checking for size of "void *" : 8 00:03:09.184 Checking for size of "void *" : 8 (cached) 00:03:09.184 Library m found: YES 00:03:09.184 Library numa found: YES 00:03:09.184 Has header "numaif.h" : YES 00:03:09.184 Library fdt found: NO 00:03:09.184 Library execinfo found: NO 00:03:09.184 Has header "execinfo.h" : YES 00:03:09.184 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:09.184 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:09.184 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:09.184 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:09.184 Run-time dependency openssl found: YES 3.1.1 00:03:09.184 Run-time dependency libpcap found: YES 1.10.4 00:03:09.184 Has header "pcap.h" with dependency libpcap: YES 00:03:09.184 Compiler for C supports arguments -Wcast-qual: YES 00:03:09.184 Compiler for C supports arguments -Wdeprecated: YES 00:03:09.184 Compiler for C supports arguments -Wformat: YES 00:03:09.184 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:09.184 Compiler for C supports arguments -Wformat-security: NO 00:03:09.184 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:09.184 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:09.184 Compiler for C supports arguments -Wnested-externs: YES 00:03:09.184 Compiler for C supports arguments -Wold-style-definition: YES 00:03:09.184 Compiler for C supports arguments -Wpointer-arith: YES 00:03:09.184 Compiler for C supports arguments -Wsign-compare: YES 00:03:09.184 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:09.184 Compiler for C supports arguments -Wundef: YES 00:03:09.184 Compiler for C supports arguments -Wwrite-strings: YES 00:03:09.184 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:09.184 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:09.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:09.184 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:09.184 Program objdump found: YES (/usr/bin/objdump) 00:03:09.184 Compiler for C supports arguments -mavx512f: YES 00:03:09.184 Checking if "AVX512 checking" compiles: YES 00:03:09.184 Fetching value of define "__SSE4_2__" : 1 00:03:09.184 Fetching value of define "__AES__" : 1 00:03:09.184 Fetching value of define "__AVX__" : 1 00:03:09.184 Fetching value of define "__AVX2__" : 1 00:03:09.184 Fetching value of define "__AVX512BW__" : (undefined) 00:03:09.184 Fetching value of define "__AVX512CD__" : (undefined) 00:03:09.184 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:09.184 Fetching value of define "__AVX512F__" : (undefined) 00:03:09.184 Fetching value of define "__AVX512VL__" : (undefined) 00:03:09.184 Fetching value of define "__PCLMUL__" : 1 00:03:09.184 Fetching value of define "__RDRND__" : 1 00:03:09.184 Fetching value of define "__RDSEED__" : 1 00:03:09.184 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:09.184 Fetching value of define "__znver1__" : (undefined) 00:03:09.184 Fetching value of define "__znver2__" : (undefined) 00:03:09.184 Fetching value of define "__znver3__" : (undefined) 00:03:09.184 Fetching value of define "__znver4__" : (undefined) 00:03:09.184 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:09.184 Message: lib/log: Defining dependency "log" 00:03:09.184 Message: lib/kvargs: Defining dependency "kvargs" 00:03:09.184 
Message: lib/telemetry: Defining dependency "telemetry" 00:03:09.184 Checking for function "getentropy" : NO 00:03:09.184 Message: lib/eal: Defining dependency "eal" 00:03:09.184 Message: lib/ring: Defining dependency "ring" 00:03:09.184 Message: lib/rcu: Defining dependency "rcu" 00:03:09.184 Message: lib/mempool: Defining dependency "mempool" 00:03:09.184 Message: lib/mbuf: Defining dependency "mbuf" 00:03:09.184 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:09.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:09.184 Compiler for C supports arguments -mpclmul: YES 00:03:09.184 Compiler for C supports arguments -maes: YES 00:03:09.184 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:09.184 Compiler for C supports arguments -mavx512bw: YES 00:03:09.184 Compiler for C supports arguments -mavx512dq: YES 00:03:09.184 Compiler for C supports arguments -mavx512vl: YES 00:03:09.184 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:09.184 Compiler for C supports arguments -mavx2: YES 00:03:09.184 Compiler for C supports arguments -mavx: YES 00:03:09.184 Message: lib/net: Defining dependency "net" 00:03:09.184 Message: lib/meter: Defining dependency "meter" 00:03:09.184 Message: lib/ethdev: Defining dependency "ethdev" 00:03:09.184 Message: lib/pci: Defining dependency "pci" 00:03:09.184 Message: lib/cmdline: Defining dependency "cmdline" 00:03:09.184 Message: lib/metrics: Defining dependency "metrics" 00:03:09.184 Message: lib/hash: Defining dependency "hash" 00:03:09.184 Message: lib/timer: Defining dependency "timer" 00:03:09.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:09.184 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:09.184 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:09.184 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:09.184 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:09.184 Message: lib/acl: Defining dependency "acl" 00:03:09.184 Message: lib/bbdev: Defining dependency "bbdev" 00:03:09.184 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:09.184 Run-time dependency libelf found: YES 0.191 00:03:09.185 Message: lib/bpf: Defining dependency "bpf" 00:03:09.185 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:09.185 Message: lib/compressdev: Defining dependency "compressdev" 00:03:09.185 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:09.185 Message: lib/distributor: Defining dependency "distributor" 00:03:09.185 Message: lib/dmadev: Defining dependency "dmadev" 00:03:09.185 Message: lib/efd: Defining dependency "efd" 00:03:09.185 Message: lib/eventdev: Defining dependency "eventdev" 00:03:09.185 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:09.185 Message: lib/gpudev: Defining dependency "gpudev" 00:03:09.185 Message: lib/gro: Defining dependency "gro" 00:03:09.185 Message: lib/gso: Defining dependency "gso" 00:03:09.185 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:09.185 Message: lib/jobstats: Defining dependency "jobstats" 00:03:09.185 Message: lib/latencystats: Defining dependency "latencystats" 00:03:09.185 Message: lib/lpm: Defining dependency "lpm" 00:03:09.185 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:09.185 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:09.185 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:09.185 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:03:09.185 Message: lib/member: Defining dependency "member" 00:03:09.185 Message: lib/pcapng: Defining dependency "pcapng" 00:03:09.185 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:09.185 Message: lib/power: Defining dependency "power" 00:03:09.185 Message: lib/rawdev: Defining dependency "rawdev" 00:03:09.185 Message: lib/regexdev: Defining dependency "regexdev" 00:03:09.185 Message: lib/mldev: Defining dependency "mldev" 00:03:09.185 Message: lib/rib: Defining dependency "rib" 00:03:09.185 Message: lib/reorder: Defining dependency "reorder" 00:03:09.185 Message: lib/sched: Defining dependency "sched" 00:03:09.185 Message: lib/security: Defining dependency "security" 00:03:09.185 Message: lib/stack: Defining dependency "stack" 00:03:09.185 Has header "linux/userfaultfd.h" : YES 00:03:09.185 Has header "linux/vduse.h" : YES 00:03:09.185 Message: lib/vhost: Defining dependency "vhost" 00:03:09.185 Message: lib/ipsec: Defining dependency "ipsec" 00:03:09.185 Message: lib/pdcp: Defining dependency "pdcp" 00:03:09.185 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:09.185 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:09.185 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:09.185 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:09.185 Message: lib/fib: Defining dependency "fib" 00:03:09.185 Message: lib/port: Defining dependency "port" 00:03:09.185 Message: lib/pdump: Defining dependency "pdump" 00:03:09.185 Message: lib/table: Defining dependency "table" 00:03:09.185 Message: lib/pipeline: Defining dependency "pipeline" 00:03:09.185 Message: lib/graph: Defining dependency "graph" 00:03:09.185 Message: lib/node: Defining dependency "node" 00:03:09.185 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.095 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.095 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.095 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.095 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:11.095 Compiler for C supports arguments -Wno-unused-value: YES 00:03:11.095 Compiler for C supports arguments -Wno-format: YES 00:03:11.095 Compiler for C supports arguments -Wno-format-security: YES 00:03:11.095 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:11.095 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:11.095 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:11.095 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:11.095 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:11.095 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.095 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:11.095 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:11.095 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:11.095 Has header "sys/epoll.h" : YES 00:03:11.095 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:11.095 Configuring doxy-api-html.conf using configuration 00:03:11.095 Configuring doxy-api-man.conf using configuration 00:03:11.095 Program mandb found: YES (/usr/bin/mandb) 00:03:11.095 Program sphinx-build found: NO 00:03:11.095 Configuring rte_build_config.h using configuration 00:03:11.095 Message: 00:03:11.095 ================= 00:03:11.095 Applications Enabled 00:03:11.095 ================= 
00:03:11.095 00:03:11.095 apps: 00:03:11.095 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:11.095 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:11.095 test-pmd, test-regex, test-sad, test-security-perf, 00:03:11.095 00:03:11.095 Message: 00:03:11.095 ================= 00:03:11.095 Libraries Enabled 00:03:11.095 ================= 00:03:11.095 00:03:11.095 libs: 00:03:11.095 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.095 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:11.095 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:11.095 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:11.095 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:11.095 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:11.095 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:11.095 00:03:11.095 00:03:11.095 Message: 00:03:11.095 =============== 00:03:11.095 Drivers Enabled 00:03:11.095 =============== 00:03:11.095 00:03:11.095 common: 00:03:11.095 00:03:11.095 bus: 00:03:11.095 pci, vdev, 00:03:11.095 mempool: 00:03:11.095 ring, 00:03:11.095 dma: 00:03:11.095 00:03:11.095 net: 00:03:11.095 i40e, 00:03:11.095 raw: 00:03:11.095 00:03:11.095 crypto: 00:03:11.095 00:03:11.095 compress: 00:03:11.095 00:03:11.095 regex: 00:03:11.095 00:03:11.095 ml: 00:03:11.095 00:03:11.095 vdpa: 00:03:11.095 00:03:11.095 event: 00:03:11.095 00:03:11.095 baseband: 00:03:11.095 00:03:11.095 gpu: 00:03:11.095 00:03:11.095 00:03:11.095 Message: 00:03:11.095 ================= 00:03:11.095 Content Skipped 00:03:11.095 ================= 00:03:11.095 00:03:11.095 apps: 00:03:11.095 00:03:11.095 libs: 00:03:11.095 00:03:11.095 drivers: 00:03:11.095 common/cpt: not in enabled drivers build config 00:03:11.095 common/dpaax: not in enabled drivers build config 00:03:11.095 common/iavf: not in enabled drivers build config 00:03:11.095 common/idpf: not in enabled drivers build config 00:03:11.095 common/mvep: not in enabled drivers build config 00:03:11.095 common/octeontx: not in enabled drivers build config 00:03:11.095 bus/auxiliary: not in enabled drivers build config 00:03:11.095 bus/cdx: not in enabled drivers build config 00:03:11.095 bus/dpaa: not in enabled drivers build config 00:03:11.096 bus/fslmc: not in enabled drivers build config 00:03:11.096 bus/ifpga: not in enabled drivers build config 00:03:11.096 bus/platform: not in enabled drivers build config 00:03:11.096 bus/vmbus: not in enabled drivers build config 00:03:11.096 common/cnxk: not in enabled drivers build config 00:03:11.096 common/mlx5: not in enabled drivers build config 00:03:11.096 common/nfp: not in enabled drivers build config 00:03:11.096 common/qat: not in enabled drivers build config 00:03:11.096 common/sfc_efx: not in enabled drivers build config 00:03:11.096 mempool/bucket: not in enabled drivers build config 00:03:11.096 mempool/cnxk: not in enabled drivers build config 00:03:11.096 mempool/dpaa: not in enabled drivers build config 00:03:11.096 mempool/dpaa2: not in enabled drivers build config 00:03:11.096 mempool/octeontx: not in enabled drivers build config 00:03:11.096 mempool/stack: not in enabled drivers build config 00:03:11.096 dma/cnxk: not in enabled drivers build config 00:03:11.096 dma/dpaa: not in enabled drivers build config 00:03:11.096 dma/dpaa2: not in enabled drivers build config 00:03:11.096 
dma/hisilicon: not in enabled drivers build config 00:03:11.096 dma/idxd: not in enabled drivers build config 00:03:11.096 dma/ioat: not in enabled drivers build config 00:03:11.096 dma/skeleton: not in enabled drivers build config 00:03:11.096 net/af_packet: not in enabled drivers build config 00:03:11.096 net/af_xdp: not in enabled drivers build config 00:03:11.096 net/ark: not in enabled drivers build config 00:03:11.096 net/atlantic: not in enabled drivers build config 00:03:11.096 net/avp: not in enabled drivers build config 00:03:11.096 net/axgbe: not in enabled drivers build config 00:03:11.096 net/bnx2x: not in enabled drivers build config 00:03:11.096 net/bnxt: not in enabled drivers build config 00:03:11.096 net/bonding: not in enabled drivers build config 00:03:11.096 net/cnxk: not in enabled drivers build config 00:03:11.096 net/cpfl: not in enabled drivers build config 00:03:11.096 net/cxgbe: not in enabled drivers build config 00:03:11.096 net/dpaa: not in enabled drivers build config 00:03:11.096 net/dpaa2: not in enabled drivers build config 00:03:11.096 net/e1000: not in enabled drivers build config 00:03:11.096 net/ena: not in enabled drivers build config 00:03:11.096 net/enetc: not in enabled drivers build config 00:03:11.096 net/enetfec: not in enabled drivers build config 00:03:11.096 net/enic: not in enabled drivers build config 00:03:11.096 net/failsafe: not in enabled drivers build config 00:03:11.096 net/fm10k: not in enabled drivers build config 00:03:11.096 net/gve: not in enabled drivers build config 00:03:11.096 net/hinic: not in enabled drivers build config 00:03:11.096 net/hns3: not in enabled drivers build config 00:03:11.096 net/iavf: not in enabled drivers build config 00:03:11.096 net/ice: not in enabled drivers build config 00:03:11.096 net/idpf: not in enabled drivers build config 00:03:11.096 net/igc: not in enabled drivers build config 00:03:11.096 net/ionic: not in enabled drivers build config 00:03:11.096 net/ipn3ke: not in enabled drivers build config 00:03:11.096 net/ixgbe: not in enabled drivers build config 00:03:11.096 net/mana: not in enabled drivers build config 00:03:11.096 net/memif: not in enabled drivers build config 00:03:11.096 net/mlx4: not in enabled drivers build config 00:03:11.096 net/mlx5: not in enabled drivers build config 00:03:11.096 net/mvneta: not in enabled drivers build config 00:03:11.096 net/mvpp2: not in enabled drivers build config 00:03:11.096 net/netvsc: not in enabled drivers build config 00:03:11.096 net/nfb: not in enabled drivers build config 00:03:11.096 net/nfp: not in enabled drivers build config 00:03:11.096 net/ngbe: not in enabled drivers build config 00:03:11.096 net/null: not in enabled drivers build config 00:03:11.096 net/octeontx: not in enabled drivers build config 00:03:11.096 net/octeon_ep: not in enabled drivers build config 00:03:11.096 net/pcap: not in enabled drivers build config 00:03:11.096 net/pfe: not in enabled drivers build config 00:03:11.096 net/qede: not in enabled drivers build config 00:03:11.096 net/ring: not in enabled drivers build config 00:03:11.096 net/sfc: not in enabled drivers build config 00:03:11.096 net/softnic: not in enabled drivers build config 00:03:11.096 net/tap: not in enabled drivers build config 00:03:11.096 net/thunderx: not in enabled drivers build config 00:03:11.096 net/txgbe: not in enabled drivers build config 00:03:11.096 net/vdev_netvsc: not in enabled drivers build config 00:03:11.096 net/vhost: not in enabled drivers build config 00:03:11.096 net/virtio: 
not in enabled drivers build config 00:03:11.096 net/vmxnet3: not in enabled drivers build config 00:03:11.096 raw/cnxk_bphy: not in enabled drivers build config 00:03:11.096 raw/cnxk_gpio: not in enabled drivers build config 00:03:11.096 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:11.096 raw/ifpga: not in enabled drivers build config 00:03:11.096 raw/ntb: not in enabled drivers build config 00:03:11.096 raw/skeleton: not in enabled drivers build config 00:03:11.096 crypto/armv8: not in enabled drivers build config 00:03:11.096 crypto/bcmfs: not in enabled drivers build config 00:03:11.096 crypto/caam_jr: not in enabled drivers build config 00:03:11.096 crypto/ccp: not in enabled drivers build config 00:03:11.096 crypto/cnxk: not in enabled drivers build config 00:03:11.096 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.096 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.096 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.096 crypto/mlx5: not in enabled drivers build config 00:03:11.096 crypto/mvsam: not in enabled drivers build config 00:03:11.096 crypto/nitrox: not in enabled drivers build config 00:03:11.096 crypto/null: not in enabled drivers build config 00:03:11.096 crypto/octeontx: not in enabled drivers build config 00:03:11.096 crypto/openssl: not in enabled drivers build config 00:03:11.096 crypto/scheduler: not in enabled drivers build config 00:03:11.096 crypto/uadk: not in enabled drivers build config 00:03:11.096 crypto/virtio: not in enabled drivers build config 00:03:11.096 compress/isal: not in enabled drivers build config 00:03:11.096 compress/mlx5: not in enabled drivers build config 00:03:11.096 compress/octeontx: not in enabled drivers build config 00:03:11.096 compress/zlib: not in enabled drivers build config 00:03:11.096 regex/mlx5: not in enabled drivers build config 00:03:11.096 regex/cn9k: not in enabled drivers build config 00:03:11.096 ml/cnxk: not in enabled drivers build config 00:03:11.096 vdpa/ifc: not in enabled drivers build config 00:03:11.096 vdpa/mlx5: not in enabled drivers build config 00:03:11.096 vdpa/nfp: not in enabled drivers build config 00:03:11.096 vdpa/sfc: not in enabled drivers build config 00:03:11.096 event/cnxk: not in enabled drivers build config 00:03:11.096 event/dlb2: not in enabled drivers build config 00:03:11.096 event/dpaa: not in enabled drivers build config 00:03:11.096 event/dpaa2: not in enabled drivers build config 00:03:11.096 event/dsw: not in enabled drivers build config 00:03:11.096 event/opdl: not in enabled drivers build config 00:03:11.096 event/skeleton: not in enabled drivers build config 00:03:11.096 event/sw: not in enabled drivers build config 00:03:11.096 event/octeontx: not in enabled drivers build config 00:03:11.096 baseband/acc: not in enabled drivers build config 00:03:11.096 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:11.096 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:11.096 baseband/la12xx: not in enabled drivers build config 00:03:11.096 baseband/null: not in enabled drivers build config 00:03:11.096 baseband/turbo_sw: not in enabled drivers build config 00:03:11.096 gpu/cuda: not in enabled drivers build config 00:03:11.096 00:03:11.096 00:03:11.096 Build targets in project: 220 00:03:11.096 00:03:11.096 DPDK 23.11.0 00:03:11.096 00:03:11.096 User defined options 00:03:11.096 libdir : lib 00:03:11.096 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:11.096 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:03:11.096 c_link_args : 00:03:11.096 enable_docs : false 00:03:11.096 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:11.096 enable_kmods : false 00:03:11.096 machine : native 00:03:11.096 tests : false 00:03:11.096 00:03:11.096 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:11.096 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:11.096 10:16:46 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:11.096 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:11.355 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:11.355 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:11.355 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:11.355 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:11.355 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.355 [6/710] Linking static target lib/librte_kvargs.a 00:03:11.355 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:11.355 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:11.355 [9/710] Linking static target lib/librte_log.a 00:03:11.355 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:11.613 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.871 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:11.871 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.871 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:11.871 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:11.871 [16/710] Linking target lib/librte_log.so.24.0 00:03:11.871 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:12.129 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:12.129 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:12.129 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:12.129 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:12.387 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:12.387 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:12.387 [24/710] Linking target lib/librte_kvargs.so.24.0 00:03:12.387 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:12.646 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:12.646 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:12.646 [28/710] Linking static target lib/librte_telemetry.a 00:03:12.646 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:12.646 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:12.646 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:12.646 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:12.904 [33/710] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:12.904 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.904 [35/710] Linking target lib/librte_telemetry.so.24.0 00:03:12.904 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:12.904 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:12.904 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:13.163 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:13.163 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:13.163 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:13.163 [42/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:13.163 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:13.163 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:13.421 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:13.421 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:13.421 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:13.680 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:13.680 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:13.680 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:13.938 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:13.938 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:13.938 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:13.938 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:13.938 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:13.938 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:14.197 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:14.197 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.197 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:14.197 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:14.197 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:14.197 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:14.455 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:14.455 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:14.455 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:14.455 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:14.455 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:14.713 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:14.713 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:14.970 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:14.970 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:14.970 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:03:14.970 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:14.970 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:14.970 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:14.970 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:14.970 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:15.228 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:15.228 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:15.486 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:15.486 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:15.486 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:15.486 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:15.745 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:15.745 [85/710] Linking static target lib/librte_ring.a 00:03:15.745 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:15.745 [87/710] Linking static target lib/librte_eal.a 00:03:16.003 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:16.003 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:16.003 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.261 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:16.261 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:16.261 [93/710] Linking static target lib/librte_mempool.a 00:03:16.261 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:16.261 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:16.520 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:16.520 [97/710] Linking static target lib/librte_rcu.a 00:03:16.520 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:16.520 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:16.778 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:16.778 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.778 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:16.778 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.778 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:17.037 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:17.037 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:17.296 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:17.296 [108/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:17.296 [109/710] Linking static target lib/librte_mbuf.a 00:03:17.296 [110/710] Linking static target lib/librte_net.a 00:03:17.296 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:17.296 [112/710] Linking static target lib/librte_meter.a 00:03:17.555 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.555 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:17.555 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:17.555 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:17.555 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.555 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:17.813 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.381 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:18.381 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:18.640 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:18.640 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:18.640 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:18.640 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:18.640 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:18.640 [127/710] Linking static target lib/librte_pci.a 00:03:18.898 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:18.898 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:18.898 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.898 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:18.898 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:19.157 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:19.157 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:19.157 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:19.157 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:19.157 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:19.157 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:19.157 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:19.157 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:19.416 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:19.416 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:19.416 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:19.416 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:19.416 [145/710] Linking static target lib/librte_cmdline.a 00:03:19.673 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:19.931 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:19.931 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:19.931 [149/710] Linking static target lib/librte_metrics.a 00:03:19.931 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:20.189 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.447 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.447 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:20.447 [154/710] Linking static target 
lib/librte_timer.a 00:03:20.447 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.706 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.273 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:21.274 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:21.274 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:21.274 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:21.841 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:21.841 [162/710] Linking static target lib/librte_ethdev.a 00:03:21.841 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:21.841 [164/710] Linking static target lib/librte_bitratestats.a 00:03:22.100 [165/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:22.100 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:22.100 [167/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.100 [168/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.358 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:22.358 [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:22.358 [171/710] Linking static target lib/librte_bbdev.a 00:03:22.358 [172/710] Linking static target lib/librte_hash.a 00:03:22.358 [173/710] Linking target lib/librte_eal.so.24.0 00:03:22.358 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:22.358 [175/710] Linking target lib/librte_ring.so.24.0 00:03:22.617 [176/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:22.617 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:22.617 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:22.617 [179/710] Linking target lib/librte_meter.so.24.0 00:03:22.617 [180/710] Linking target lib/librte_rcu.so.24.0 00:03:22.617 [181/710] Linking target lib/librte_mempool.so.24.0 00:03:22.617 [182/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:22.876 [183/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:22.876 [184/710] Linking target lib/librte_pci.so.24.0 00:03:22.876 [185/710] Linking target lib/librte_timer.so.24.0 00:03:22.876 [186/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:22.876 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:22.876 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:03:22.876 [189/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.876 [190/710] Linking target lib/librte_mbuf.so.24.0 00:03:22.876 [191/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:22.876 [192/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.876 [193/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:22.876 [194/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:22.876 [195/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:23.134 [196/710] Linking target lib/librte_net.so.24.0 00:03:23.134 [197/710] Compiling C object 
lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:23.134 [198/710] Linking static target lib/acl/libavx512_tmp.a 00:03:23.134 [199/710] Linking target lib/librte_bbdev.so.24.0 00:03:23.134 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:23.134 [201/710] Linking target lib/librte_cmdline.so.24.0 00:03:23.134 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:23.134 [203/710] Linking static target lib/librte_acl.a 00:03:23.134 [204/710] Linking target lib/librte_hash.so.24.0 00:03:23.393 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:23.393 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:23.393 [207/710] Linking static target lib/librte_cfgfile.a 00:03:23.393 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:23.651 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.651 [210/710] Linking target lib/librte_acl.so.24.0 00:03:23.651 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:23.651 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:23.651 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:23.910 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.910 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:03:23.910 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:24.168 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:24.168 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:24.168 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:24.427 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:24.427 [221/710] Linking static target lib/librte_bpf.a 00:03:24.427 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:24.427 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:24.427 [224/710] Linking static target lib/librte_compressdev.a 00:03:24.686 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.686 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:24.686 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:24.944 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:24.944 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:24.944 [230/710] Linking static target lib/librte_distributor.a 00:03:24.944 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.944 [232/710] Linking target lib/librte_compressdev.so.24.0 00:03:24.944 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:25.203 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.203 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:25.203 [236/710] Linking static target lib/librte_dmadev.a 00:03:25.203 [237/710] Linking target lib/librte_distributor.so.24.0 00:03:25.203 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:25.770 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.770 [240/710] Linking target lib/librte_dmadev.so.24.0 00:03:25.770 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:25.770 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:26.029 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:26.029 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:26.288 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:26.288 [246/710] Linking static target lib/librte_efd.a 00:03:26.288 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:26.288 [248/710] Linking static target lib/librte_cryptodev.a 00:03:26.546 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:26.546 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.546 [251/710] Linking target lib/librte_efd.so.24.0 00:03:26.820 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.820 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:26.820 [254/710] Linking static target lib/librte_dispatcher.a 00:03:26.820 [255/710] Linking target lib/librte_ethdev.so.24.0 00:03:26.820 [256/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:27.090 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:27.090 [258/710] Linking target lib/librte_metrics.so.24.0 00:03:27.090 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:27.090 [260/710] Linking target lib/librte_bpf.so.24.0 00:03:27.090 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:27.090 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:03:27.349 [263/710] Linking static target lib/librte_gpudev.a 00:03:27.349 [264/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:27.349 [265/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.349 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:27.349 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:27.349 [268/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:27.607 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.607 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:27.607 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:27.607 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:03:27.865 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:28.124 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.124 [275/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:28.124 [276/710] Linking target lib/librte_gpudev.so.24.0 00:03:28.124 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:28.124 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:28.124 [279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:28.124 [280/710] Linking static target lib/librte_gro.a 00:03:28.124 [281/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:28.124 [282/710] Linking static target lib/librte_eventdev.a 00:03:28.383 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:28.383 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:28.383 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:28.383 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.383 [287/710] Linking target lib/librte_gro.so.24.0 00:03:28.383 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:28.642 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:28.642 [290/710] Linking static target lib/librte_gso.a 00:03:28.902 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:28.902 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.902 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:28.902 [294/710] Linking target lib/librte_gso.so.24.0 00:03:28.902 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:28.902 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:29.223 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:29.224 [298/710] Linking static target lib/librte_jobstats.a 00:03:29.224 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:29.224 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:29.224 [301/710] Linking static target lib/librte_ip_frag.a 00:03:29.224 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:29.224 [303/710] Linking static target lib/librte_latencystats.a 00:03:29.482 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.482 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:29.482 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.482 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.482 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:03:29.482 [309/710] Linking target lib/librte_latencystats.so.24.0 00:03:29.740 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:29.740 [311/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:29.740 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:29.740 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:29.740 [314/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:29.740 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:29.741 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:30.004 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:30.264 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.264 [319/710] Linking 
target lib/librte_eventdev.so.24.0 00:03:30.264 [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:30.264 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:30.264 [322/710] Linking static target lib/librte_lpm.a 00:03:30.523 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:30.523 [324/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:30.523 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:03:30.523 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:30.523 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:30.523 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:30.523 [329/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:30.523 [330/710] Linking static target lib/librte_pcapng.a 00:03:30.523 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:30.782 [332/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.782 [333/710] Linking target lib/librte_lpm.so.24.0 00:03:30.782 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.782 [335/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:30.782 [336/710] Linking target lib/librte_pcapng.so.24.0 00:03:31.041 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:31.041 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:31.041 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:31.300 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:31.300 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:31.300 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:31.300 [343/710] Linking static target lib/librte_power.a 00:03:31.300 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:31.560 [345/710] Linking static target lib/librte_regexdev.a 00:03:31.560 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:31.560 [347/710] Linking static target lib/librte_member.a 00:03:31.560 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:31.560 [349/710] Linking static target lib/librte_rawdev.a 00:03:31.560 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:31.560 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:31.819 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:31.819 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.819 [354/710] Linking target lib/librte_member.so.24.0 00:03:31.819 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:31.819 [356/710] Linking static target lib/librte_mldev.a 00:03:31.819 [357/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.078 [358/710] Linking target lib/librte_power.so.24.0 00:03:32.078 [359/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.078 [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 
00:03:32.078 [361/710] Linking target lib/librte_rawdev.so.24.0 00:03:32.078 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:32.078 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.337 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:32.337 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:32.596 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:32.596 [367/710] Linking static target lib/librte_reorder.a 00:03:32.596 [368/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:32.596 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:32.596 [370/710] Linking static target lib/librte_rib.a 00:03:32.596 [371/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:32.596 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:32.596 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:32.855 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:32.855 [375/710] Linking static target lib/librte_stack.a 00:03:32.855 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.855 [377/710] Linking target lib/librte_reorder.so.24.0 00:03:33.114 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:33.114 [379/710] Linking static target lib/librte_security.a 00:03:33.114 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:33.114 [381/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.114 [382/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.114 [383/710] Linking target lib/librte_stack.so.24.0 00:03:33.114 [384/710] Linking target lib/librte_rib.so.24.0 00:03:33.114 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.114 [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:33.114 [387/710] Linking target lib/librte_mldev.so.24.0 00:03:33.373 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.373 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:33.373 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:33.373 [391/710] Linking target lib/librte_security.so.24.0 00:03:33.631 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:33.631 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:33.631 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:33.631 [395/710] Linking static target lib/librte_sched.a 00:03:33.890 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:34.148 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.148 [398/710] Linking target lib/librte_sched.so.24.0 00:03:34.148 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:34.148 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:34.407 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:34.407 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:34.666 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:34.666 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:34.924 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:35.182 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:35.182 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:35.182 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:35.182 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:35.440 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:35.441 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:35.441 [412/710] Linking static target lib/librte_ipsec.a 00:03:35.699 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:35.699 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:35.699 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:35.957 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.957 [417/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:35.957 [418/710] Linking target lib/librte_ipsec.so.24.0 00:03:35.957 [419/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:35.957 [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:35.957 [421/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:35.957 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:35.957 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:36.891 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:36.891 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:36.891 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:36.891 [427/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:36.891 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:36.891 [429/710] Linking static target lib/librte_pdcp.a 00:03:36.891 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:36.891 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:36.891 [432/710] Linking static target lib/librte_fib.a 00:03:37.150 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.408 [434/710] Linking target lib/librte_pdcp.so.24.0 00:03:37.408 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.408 [436/710] Linking target lib/librte_fib.so.24.0 00:03:37.667 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:37.925 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:37.925 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:38.183 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:38.183 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:38.183 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:38.183 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:38.439 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:38.696 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:38.697 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:38.697 [447/710] Linking static target lib/librte_port.a 00:03:38.955 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:38.955 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:38.955 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:38.955 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:39.213 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.213 [453/710] Linking target lib/librte_port.so.24.0 00:03:39.213 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:39.213 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:39.213 [456/710] Linking static target lib/librte_pdump.a 00:03:39.213 [457/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:39.213 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:39.471 [459/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:39.471 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.471 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:39.471 [462/710] Linking target lib/librte_pdump.so.24.0 00:03:40.038 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:40.038 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:40.038 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:40.038 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:40.296 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:40.296 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:40.554 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:40.554 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:40.554 [471/710] Linking static target lib/librte_table.a 00:03:40.812 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:40.812 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:41.379 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.379 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:41.379 [476/710] Linking target lib/librte_table.so.24.0 00:03:41.379 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:41.379 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:41.637 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:41.637 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:41.895 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:42.177 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:42.177 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:42.177 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:42.436 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:42.436 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:42.695 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:42.953 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:42.953 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:42.953 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:42.953 [491/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:42.953 [492/710] Linking static target lib/librte_graph.a 00:03:43.212 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:43.470 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:43.728 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:43.728 [496/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.728 [497/710] Linking target lib/librte_graph.so.24.0 00:03:43.728 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:43.987 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:43.987 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:44.245 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:44.245 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:44.245 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:44.245 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:44.504 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:44.504 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:44.762 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:44.762 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:45.020 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:45.020 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:45.020 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:45.278 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:45.278 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:45.278 [514/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:45.278 [515/710] Linking static target lib/librte_node.a 00:03:45.536 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.536 [517/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:45.536 [518/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:45.536 [519/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:45.536 [520/710] Linking target lib/librte_node.so.24.0 00:03:45.794 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:45.794 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:46.053 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:46.053 [524/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:46.053 [525/710] Linking static target drivers/librte_bus_pci.a 00:03:46.053 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:46.053 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:46.053 [528/710] Linking static target drivers/librte_bus_vdev.a 00:03:46.053 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:46.053 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:46.311 [531/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.311 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:46.311 [533/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:46.311 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:46.311 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:46.311 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:46.311 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:46.570 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.570 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:46.570 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:46.570 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:46.570 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:46.570 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:46.570 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:46.828 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:46.828 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:47.394 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:47.394 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:47.652 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:47.652 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:47.652 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:48.587 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:48.587 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:48.587 [554/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:48.587 [555/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:48.587 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:48.587 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:49.153 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:49.153 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:49.411 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:49.411 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:49.669 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:49.927 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:50.184 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:50.184 [565/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:50.184 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:50.751 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:50.751 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:51.008 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:51.008 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:51.008 [571/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:51.008 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:51.008 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:51.266 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:51.266 [575/710] Linking static target lib/librte_vhost.a 00:03:51.266 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:51.524 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:51.524 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:51.524 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:51.781 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:51.781 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:51.781 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:52.039 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:52.039 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:52.039 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:52.039 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:52.039 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:52.297 [588/710] Linking static target drivers/librte_net_i40e.a 00:03:52.297 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:52.297 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:52.297 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:52.297 [592/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.297 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:52.554 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:52.812 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.812 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:52.812 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:53.070 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:53.070 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:53.327 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:53.585 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:53.585 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:53.585 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:53.585 [604/710] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:53.844 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:53.844 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:54.102 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:54.361 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:54.361 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:54.619 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:54.619 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:54.619 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:54.619 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:54.877 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:54.877 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:54.877 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:54.877 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:55.135 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:55.394 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:55.394 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:55.652 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:55.652 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:55.652 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:56.588 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:56.588 [625/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:56.588 [626/710] Linking static target lib/librte_pipeline.a 00:03:56.588 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:56.588 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:56.847 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:56.847 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:57.105 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:57.105 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:57.105 [633/710] Linking target app/dpdk-dumpcap 00:03:57.105 [634/710] Linking target app/dpdk-graph 00:03:57.105 [635/710] Linking target app/dpdk-pdump 00:03:57.105 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:57.364 [637/710] Linking target app/dpdk-proc-info 00:03:57.364 [638/710] Linking target app/dpdk-test-acl 00:03:57.622 [639/710] Linking target app/dpdk-test-cmdline 00:03:57.622 [640/710] Linking target app/dpdk-test-compress-perf 00:03:57.622 [641/710] Linking target app/dpdk-test-crypto-perf 00:03:57.622 [642/710] Linking target app/dpdk-test-dma-perf 00:03:57.622 
[643/710] Linking target app/dpdk-test-fib 00:03:57.622 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:57.881 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:58.139 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:58.139 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:58.139 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:58.397 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:58.397 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:58.397 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:58.397 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:58.656 [653/710] Linking target app/dpdk-test-gpudev 00:03:58.914 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:58.914 [655/710] Linking target app/dpdk-test-eventdev 00:03:58.914 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:58.914 [657/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:58.914 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:59.172 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:59.430 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:59.430 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:59.430 [662/710] Linking target app/dpdk-test-flow-perf 00:03:59.430 [663/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.430 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:59.430 [665/710] Linking target lib/librte_pipeline.so.24.0 00:03:59.688 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:59.689 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:59.689 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:59.950 [669/710] Linking target app/dpdk-test-bbdev 00:03:59.950 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:59.950 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:00.210 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:00.210 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:00.468 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:00.468 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:04:00.468 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:00.468 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:04:01.034 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:01.034 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:01.034 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:01.034 [681/710] Linking target app/dpdk-test-pipeline 00:04:01.035 [682/710] Linking target app/dpdk-test-mldev 00:04:01.297 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:04:01.898 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:01.898 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:01.898 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:01.898 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:01.898 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:02.156 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:02.156 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:02.414 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:04:02.414 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:02.673 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:02.932 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:03.191 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:03.191 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:03.450 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:03.709 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:03.709 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:03.709 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:03.709 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:03.709 [702/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:03.968 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:03.968 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:03.968 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:04.228 [706/710] Linking target app/dpdk-test-regex 00:04:04.228 [707/710] Linking target app/dpdk-test-sad 00:04:04.487 [708/710] Linking target app/dpdk-testpmd 00:04:04.487 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:05.054 [710/710] Linking target app/dpdk-test-security-perf 00:04:05.054 10:17:40 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:04:05.054 10:17:40 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:05.054 10:17:40 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:05.054 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:05.054 [0/1] Installing files. 
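The trace above shows autobuild_common.sh checking `uname -s` (the FreeBSD branch is skipped on this Linux host) and then invoking ninja to install the freshly built DPDK from its out-of-tree build directory. A minimal stand-alone sketch of that configure/build/install sequence, assuming the build-tmp directory and install prefix implied by the paths in this log (the -j10 job count is likewise copied from the log), would be:

    # Sketch only, not the autobuild script itself: configure, build, and
    # install DPDK with meson/ninja, mirroring the commands recorded above.
    cd /home/vagrant/spdk_repo/dpdk                      # repo path as it appears in the log
    meson setup build-tmp --prefix="$PWD/build"          # prefix assumed from the install destinations below
    ninja -C build-tmp -j10                              # parallel compile and link of libraries, drivers, apps
    ninja -C build-tmp -j10 install                      # installs libraries, headers, and the examples tree

The install step that follows copies the DPDK example sources into build/share/dpdk/examples, which is what the "Installing ... to ..." lines below record.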
00:04:05.320 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.320 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:05.322 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.322 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.323 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:05.324 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:05.324 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:05.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:05.325 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.325 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.584 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:05.585 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:05.585 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.585 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:05.847 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:05.847 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:05.847 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:05.847 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:05.847 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.848 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.849 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:05.850 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:05.850 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:05.850 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:05.850 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:05.850 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:05.850 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:05.850 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:05.850 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:05.850 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:05.850 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:05.850 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:05.850 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:05.850 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:05.850 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:05.850 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:05.850 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:05.850 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:05.850 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:05.850 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:05.850 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:05.850 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:05.850 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:05.850 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:05.850 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:05.850 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:05.850 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:05.850 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:05.850 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:05.850 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:05.850 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:05.850 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:05.850 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:05.850 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:05.850 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:05.850 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:05.850 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:05.850 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:05.850 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:05.850 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:05.850 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:05.850 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:05.850 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:05.850 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:05.850 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:05.850 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:05.850 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:05.850 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:05.850 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:05.850 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:05.850 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:05.850 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:05.850 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:05.850 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:05.850 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:05.850 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:05.850 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:05.850 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:05.850 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:05.850 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:05.850 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:05.850 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:05.850 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:05.850 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:05.850 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:05.850 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:05.850 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:05.850 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:05.850 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:05.851 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:05.851 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:05.851 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:05.851 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:05.851 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:05.851 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:05.851 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:05.851 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:05.851 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:05.851 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:05.851 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:05.851 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:05.851 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:05.851 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:05.851 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:05.851 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:05.851 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:05.851 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:05.851 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:05.851 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:05.851 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:05.851 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:05.851 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:05.851 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:05.851 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:05.851 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:05.851 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:05.851 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:05.851 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:05.851 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:05.851 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:05.851 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:05.851 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:05.851 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:05.851 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:05.851 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:05.851 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:05.851 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:05.851 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:05.851 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:05.851 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:05.851 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:05.851 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:05.851 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:05.851 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:05.851 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:05.851 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:05.851 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:05.851 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:05.851 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:05.851 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:05.851 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:05.851 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:05.851 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:05.851 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:05.851 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:05.851 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:05.851 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:05.851 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:05.851 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:05.851 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:05.851 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:05.851 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:05.851 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
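The librte_*.so.24.0 -> .so.24 -> .so chains installed above are standard shared-library versioning, and the libdpdk.pc / libdpdk-libs.pc files placed earlier in build/lib/pkgconfig are how a downstream build locates this private DPDK copy. A minimal sketch of how a consumer would resolve it, with paths taken from the log; the pkg-config query itself is illustrative and not part of the original run:

  # Point pkg-config at the freshly installed DPDK build tree
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  # Print the include and link flags recorded in libdpdk.pc
  pkg-config --cflags --libs libdpdk
  # The versioned symlinks installed above mean a link against, e.g., librte_eal
  # resolves librte_eal.so -> librte_eal.so.24 -> librte_eal.so.24.0 at runtime.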
00:04:05.851 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:05.851 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:04:05.851 10:17:41 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:04:05.851 10:17:41 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:05.851 00:04:05.851 real 1m2.223s 00:04:05.851 user 7m38.754s 00:04:05.851 sys 1m5.214s 00:04:05.851 ************************************ 00:04:05.851 END TEST build_native_dpdk 00:04:05.851 ************************************ 00:04:05.851 10:17:41 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:05.851 10:17:41 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:05.851 10:17:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:05.851 10:17:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:05.851 10:17:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:05.851 10:17:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:06.111 10:17:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:06.111 10:17:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:06.111 10:17:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:06.111 10:17:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:04:06.111 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:06.111 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:06.111 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:06.370 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:06.628 Using 'verbs' RDMA provider 00:04:19.775 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:34.658 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:34.658 Creating mk/config.mk...done. 00:04:34.658 Creating mk/cc.flags.mk...done. 00:04:34.658 Type 'make' to build. 00:04:34.658 10:18:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:34.658 10:18:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:34.658 10:18:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:34.658 10:18:08 -- common/autotest_common.sh@10 -- $ set +x 00:04:34.658 ************************************ 00:04:34.658 START TEST make 00:04:34.658 ************************************ 00:04:34.658 10:18:08 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:34.658 make[1]: Nothing to be done for 'all'. 
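The configure invocation in the log above wires SPDK to the DPDK tree that was just installed via --with-dpdk, which is why the extra pkgconfig directory and the DPDK include/lib paths are reported. Reduced to its essentials (flags copied from the logged command; the full run passes many more feature switches):

  cd /home/vagrant/spdk_repo/spdk
  # Build SPDK as shared libraries against the external DPDK install tree
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared \
              --enable-debug --enable-werror
  make -j10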
00:04:34.658 The Meson build system 00:04:34.658 Version: 1.5.0 00:04:34.658 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:04:34.658 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:34.658 Build type: native build 00:04:34.658 Project name: libvfio-user 00:04:34.658 Project version: 0.0.1 00:04:34.658 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:34.658 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:34.658 Host machine cpu family: x86_64 00:04:34.658 Host machine cpu: x86_64 00:04:34.658 Run-time dependency threads found: YES 00:04:34.658 Library dl found: YES 00:04:34.658 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:34.658 Run-time dependency json-c found: YES 0.17 00:04:34.658 Run-time dependency cmocka found: YES 1.1.7 00:04:34.658 Program pytest-3 found: NO 00:04:34.658 Program flake8 found: NO 00:04:34.658 Program misspell-fixer found: NO 00:04:34.658 Program restructuredtext-lint found: NO 00:04:34.658 Program valgrind found: YES (/usr/bin/valgrind) 00:04:34.658 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:34.658 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:34.658 Compiler for C supports arguments -Wwrite-strings: YES 00:04:34.658 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:34.658 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:04:34.658 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:04:34.658 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
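The summary above is SPDK's bundled libvfio-user being configured with Meson during make. A roughly equivalent standalone invocation, reconstructed from the options the log reports (source/build directories as logged; the option spellings are ordinary Meson ones and are an assumption, since the log only shows the resulting configuration):

  # Configure libvfio-user as a shared debug build into a separate build dir
  meson setup /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
        /home/vagrant/spdk_repo/spdk/libvfio-user \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  # Compile, then stage the install under DESTDIR as the log does next
  ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
  DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
        meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug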
00:04:34.658 Build targets in project: 8 00:04:34.658 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:34.658 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:34.658 00:04:34.658 libvfio-user 0.0.1 00:04:34.658 00:04:34.658 User defined options 00:04:34.658 buildtype : debug 00:04:34.658 default_library: shared 00:04:34.658 libdir : /usr/local/lib 00:04:34.658 00:04:34.658 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:34.917 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:35.176 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:35.176 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:35.176 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:35.176 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:35.176 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:35.176 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:35.176 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:35.176 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:35.176 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:35.176 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:35.434 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:35.434 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:35.434 [13/37] Compiling C object samples/client.p/client.c.o 00:04:35.434 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:35.434 [15/37] Compiling C object samples/null.p/null.c.o 00:04:35.434 [16/37] Compiling C object samples/server.p/server.c.o 00:04:35.434 [17/37] Linking target samples/client 00:04:35.434 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:35.434 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:35.434 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:35.434 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:35.434 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:35.434 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:35.434 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:35.434 [25/37] Linking target lib/libvfio-user.so.0.0.1 00:04:35.435 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:35.435 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:35.435 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:35.435 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:35.693 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:35.693 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:35.693 [32/37] Linking target samples/gpio-pci-idio-16 00:04:35.693 [33/37] Linking target samples/server 00:04:35.693 [34/37] Linking target samples/null 00:04:35.693 [35/37] Linking target samples/lspci 00:04:35.693 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:35.693 [37/37] Linking target test/unit_tests 00:04:35.693 INFO: autodetecting backend as ninja 00:04:35.693 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:35.693 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:36.260 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:36.260 ninja: no work to do. 00:05:32.502 CC lib/log/log.o 00:05:32.502 CC lib/log/log_flags.o 00:05:32.502 CC lib/log/log_deprecated.o 00:05:32.502 CC lib/ut/ut.o 00:05:32.502 CC lib/ut_mock/mock.o 00:05:32.502 LIB libspdk_ut.a 00:05:32.502 LIB libspdk_log.a 00:05:32.502 LIB libspdk_ut_mock.a 00:05:32.502 SO libspdk_ut.so.2.0 00:05:32.502 SO libspdk_log.so.7.0 00:05:32.502 SO libspdk_ut_mock.so.6.0 00:05:32.502 SYMLINK libspdk_log.so 00:05:32.502 SYMLINK libspdk_ut.so 00:05:32.502 SYMLINK libspdk_ut_mock.so 00:05:32.502 CC lib/ioat/ioat.o 00:05:32.502 CXX lib/trace_parser/trace.o 00:05:32.502 CC lib/util/base64.o 00:05:32.502 CC lib/util/bit_array.o 00:05:32.502 CC lib/dma/dma.o 00:05:32.502 CC lib/util/crc16.o 00:05:32.502 CC lib/util/crc32.o 00:05:32.502 CC lib/util/cpuset.o 00:05:32.502 CC lib/util/crc32c.o 00:05:32.502 CC lib/vfio_user/host/vfio_user_pci.o 00:05:32.502 CC lib/util/crc32_ieee.o 00:05:32.502 CC lib/util/crc64.o 00:05:32.502 CC lib/util/dif.o 00:05:32.502 CC lib/util/fd.o 00:05:32.502 LIB libspdk_dma.a 00:05:32.502 CC lib/util/fd_group.o 00:05:32.502 SO libspdk_dma.so.5.0 00:05:32.502 CC lib/vfio_user/host/vfio_user.o 00:05:32.502 SYMLINK libspdk_dma.so 00:05:32.502 CC lib/util/file.o 00:05:32.502 CC lib/util/hexlify.o 00:05:32.502 CC lib/util/iov.o 00:05:32.502 LIB libspdk_ioat.a 00:05:32.502 CC lib/util/math.o 00:05:32.502 SO libspdk_ioat.so.7.0 00:05:32.502 CC lib/util/net.o 00:05:32.502 SYMLINK libspdk_ioat.so 00:05:32.502 CC lib/util/pipe.o 00:05:32.502 CC lib/util/strerror_tls.o 00:05:32.502 CC lib/util/string.o 00:05:32.502 LIB libspdk_vfio_user.a 00:05:32.502 CC lib/util/uuid.o 00:05:32.502 SO libspdk_vfio_user.so.5.0 00:05:32.502 CC lib/util/xor.o 00:05:32.502 CC lib/util/zipf.o 00:05:32.502 SYMLINK libspdk_vfio_user.so 00:05:32.502 CC lib/util/md5.o 00:05:32.502 LIB libspdk_util.a 00:05:32.502 SO libspdk_util.so.10.0 00:05:32.502 SYMLINK libspdk_util.so 00:05:32.502 LIB libspdk_trace_parser.a 00:05:32.502 SO libspdk_trace_parser.so.6.0 00:05:32.502 SYMLINK libspdk_trace_parser.so 00:05:32.502 CC lib/conf/conf.o 00:05:32.502 CC lib/rdma_provider/common.o 00:05:32.502 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:32.502 CC lib/rdma_utils/rdma_utils.o 00:05:32.502 CC lib/env_dpdk/env.o 00:05:32.502 CC lib/idxd/idxd.o 00:05:32.502 CC lib/env_dpdk/memory.o 00:05:32.502 CC lib/json/json_util.o 00:05:32.502 CC lib/json/json_parse.o 00:05:32.502 CC lib/vmd/vmd.o 00:05:32.502 CC lib/json/json_write.o 00:05:32.502 LIB libspdk_rdma_provider.a 00:05:32.502 SO libspdk_rdma_provider.so.6.0 00:05:32.502 LIB libspdk_conf.a 00:05:32.502 CC lib/idxd/idxd_user.o 00:05:32.502 CC lib/idxd/idxd_kernel.o 00:05:32.502 SO libspdk_conf.so.6.0 00:05:32.502 LIB libspdk_rdma_utils.a 00:05:32.502 SYMLINK libspdk_rdma_provider.so 00:05:32.502 CC lib/env_dpdk/pci.o 00:05:32.502 SO libspdk_rdma_utils.so.1.0 00:05:32.502 SYMLINK libspdk_conf.so 00:05:32.502 CC lib/env_dpdk/init.o 00:05:32.502 SYMLINK libspdk_rdma_utils.so 00:05:32.502 CC lib/env_dpdk/threads.o 00:05:32.502 CC lib/env_dpdk/pci_ioat.o 00:05:32.502 LIB libspdk_json.a 00:05:32.502 CC lib/env_dpdk/pci_virtio.o 00:05:32.502 SO libspdk_json.so.6.0 00:05:32.502 CC lib/env_dpdk/pci_vmd.o 00:05:32.502 CC lib/vmd/led.o 00:05:32.502 SYMLINK libspdk_json.so 00:05:32.502 CC 
lib/env_dpdk/pci_idxd.o 00:05:32.502 LIB libspdk_idxd.a 00:05:32.502 SO libspdk_idxd.so.12.1 00:05:32.502 CC lib/env_dpdk/pci_event.o 00:05:32.502 CC lib/env_dpdk/sigbus_handler.o 00:05:32.502 CC lib/env_dpdk/pci_dpdk.o 00:05:32.502 SYMLINK libspdk_idxd.so 00:05:32.502 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:32.502 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:32.502 LIB libspdk_vmd.a 00:05:32.502 SO libspdk_vmd.so.6.0 00:05:32.502 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:32.502 CC lib/jsonrpc/jsonrpc_server.o 00:05:32.502 CC lib/jsonrpc/jsonrpc_client.o 00:05:32.502 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:32.502 SYMLINK libspdk_vmd.so 00:05:32.502 LIB libspdk_jsonrpc.a 00:05:32.502 SO libspdk_jsonrpc.so.6.0 00:05:32.502 SYMLINK libspdk_jsonrpc.so 00:05:32.502 LIB libspdk_env_dpdk.a 00:05:32.502 CC lib/rpc/rpc.o 00:05:32.502 SO libspdk_env_dpdk.so.15.0 00:05:32.502 SYMLINK libspdk_env_dpdk.so 00:05:32.502 LIB libspdk_rpc.a 00:05:32.502 SO libspdk_rpc.so.6.0 00:05:32.502 SYMLINK libspdk_rpc.so 00:05:32.502 CC lib/notify/notify.o 00:05:32.502 CC lib/notify/notify_rpc.o 00:05:32.502 CC lib/keyring/keyring.o 00:05:32.502 CC lib/keyring/keyring_rpc.o 00:05:32.502 CC lib/trace/trace.o 00:05:32.502 CC lib/trace/trace_flags.o 00:05:32.502 CC lib/trace/trace_rpc.o 00:05:32.502 LIB libspdk_notify.a 00:05:32.502 SO libspdk_notify.so.6.0 00:05:32.502 LIB libspdk_trace.a 00:05:32.502 SYMLINK libspdk_notify.so 00:05:32.502 LIB libspdk_keyring.a 00:05:32.502 SO libspdk_trace.so.11.0 00:05:32.502 SO libspdk_keyring.so.2.0 00:05:32.502 SYMLINK libspdk_keyring.so 00:05:32.502 SYMLINK libspdk_trace.so 00:05:32.502 CC lib/sock/sock.o 00:05:32.502 CC lib/sock/sock_rpc.o 00:05:32.502 CC lib/thread/thread.o 00:05:32.502 CC lib/thread/iobuf.o 00:05:32.502 LIB libspdk_sock.a 00:05:32.502 SO libspdk_sock.so.10.0 00:05:32.502 SYMLINK libspdk_sock.so 00:05:32.502 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:32.502 CC lib/nvme/nvme_ctrlr.o 00:05:32.502 CC lib/nvme/nvme_fabric.o 00:05:32.502 CC lib/nvme/nvme_ns_cmd.o 00:05:32.502 CC lib/nvme/nvme_pcie_common.o 00:05:32.502 CC lib/nvme/nvme_ns.o 00:05:32.502 CC lib/nvme/nvme_pcie.o 00:05:32.502 CC lib/nvme/nvme_qpair.o 00:05:32.502 CC lib/nvme/nvme.o 00:05:32.761 LIB libspdk_thread.a 00:05:32.761 SO libspdk_thread.so.10.1 00:05:32.761 CC lib/nvme/nvme_quirks.o 00:05:32.761 CC lib/nvme/nvme_transport.o 00:05:33.020 SYMLINK libspdk_thread.so 00:05:33.020 CC lib/accel/accel.o 00:05:33.020 CC lib/blob/blobstore.o 00:05:33.020 CC lib/blob/request.o 00:05:33.278 CC lib/init/json_config.o 00:05:33.279 CC lib/virtio/virtio.o 00:05:33.279 CC lib/vfu_tgt/tgt_endpoint.o 00:05:33.279 CC lib/vfu_tgt/tgt_rpc.o 00:05:33.279 CC lib/init/subsystem.o 00:05:33.537 CC lib/init/subsystem_rpc.o 00:05:33.537 CC lib/init/rpc.o 00:05:33.537 CC lib/nvme/nvme_discovery.o 00:05:33.537 CC lib/virtio/virtio_vhost_user.o 00:05:33.537 LIB libspdk_vfu_tgt.a 00:05:33.537 CC lib/accel/accel_rpc.o 00:05:33.537 SO libspdk_vfu_tgt.so.3.0 00:05:33.537 CC lib/accel/accel_sw.o 00:05:33.537 LIB libspdk_init.a 00:05:33.537 SYMLINK libspdk_vfu_tgt.so 00:05:33.796 SO libspdk_init.so.6.0 00:05:33.796 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:33.796 SYMLINK libspdk_init.so 00:05:33.796 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:33.796 CC lib/fsdev/fsdev.o 00:05:33.796 CC lib/virtio/virtio_vfio_user.o 00:05:34.054 CC lib/fsdev/fsdev_io.o 00:05:34.054 CC lib/event/app.o 00:05:34.054 CC lib/virtio/virtio_pci.o 00:05:34.054 CC lib/event/reactor.o 00:05:34.313 LIB libspdk_accel.a 00:05:34.313 SO libspdk_accel.so.16.0 00:05:34.313 CC 
lib/fsdev/fsdev_rpc.o 00:05:34.313 SYMLINK libspdk_accel.so 00:05:34.313 CC lib/event/log_rpc.o 00:05:34.313 CC lib/blob/zeroes.o 00:05:34.313 CC lib/blob/blob_bs_dev.o 00:05:34.313 CC lib/nvme/nvme_tcp.o 00:05:34.313 LIB libspdk_virtio.a 00:05:34.313 SO libspdk_virtio.so.7.0 00:05:34.572 LIB libspdk_fsdev.a 00:05:34.572 CC lib/nvme/nvme_opal.o 00:05:34.572 CC lib/nvme/nvme_io_msg.o 00:05:34.572 SYMLINK libspdk_virtio.so 00:05:34.572 CC lib/event/app_rpc.o 00:05:34.572 CC lib/event/scheduler_static.o 00:05:34.572 SO libspdk_fsdev.so.1.0 00:05:34.572 SYMLINK libspdk_fsdev.so 00:05:34.572 CC lib/nvme/nvme_poll_group.o 00:05:34.572 CC lib/nvme/nvme_zns.o 00:05:34.572 CC lib/nvme/nvme_stubs.o 00:05:34.572 CC lib/bdev/bdev.o 00:05:34.572 CC lib/bdev/bdev_rpc.o 00:05:34.830 LIB libspdk_event.a 00:05:34.830 SO libspdk_event.so.14.0 00:05:34.830 SYMLINK libspdk_event.so 00:05:34.830 CC lib/bdev/bdev_zone.o 00:05:35.089 CC lib/bdev/part.o 00:05:35.089 CC lib/bdev/scsi_nvme.o 00:05:35.089 CC lib/nvme/nvme_auth.o 00:05:35.089 CC lib/nvme/nvme_cuse.o 00:05:35.348 CC lib/nvme/nvme_vfio_user.o 00:05:35.348 CC lib/nvme/nvme_rdma.o 00:05:35.348 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:35.914 LIB libspdk_fuse_dispatcher.a 00:05:35.914 SO libspdk_fuse_dispatcher.so.1.0 00:05:36.173 SYMLINK libspdk_fuse_dispatcher.so 00:05:36.173 LIB libspdk_blob.a 00:05:36.431 SO libspdk_blob.so.11.0 00:05:36.431 SYMLINK libspdk_blob.so 00:05:36.689 LIB libspdk_nvme.a 00:05:36.689 CC lib/blobfs/tree.o 00:05:36.689 CC lib/blobfs/blobfs.o 00:05:36.689 CC lib/lvol/lvol.o 00:05:36.947 SO libspdk_nvme.so.14.0 00:05:37.206 SYMLINK libspdk_nvme.so 00:05:37.206 LIB libspdk_bdev.a 00:05:37.465 SO libspdk_bdev.so.16.0 00:05:37.465 LIB libspdk_blobfs.a 00:05:37.465 SYMLINK libspdk_bdev.so 00:05:37.465 SO libspdk_blobfs.so.10.0 00:05:37.465 LIB libspdk_lvol.a 00:05:37.465 SYMLINK libspdk_blobfs.so 00:05:37.723 SO libspdk_lvol.so.10.0 00:05:37.723 CC lib/ftl/ftl_core.o 00:05:37.723 CC lib/scsi/dev.o 00:05:37.723 CC lib/ftl/ftl_init.o 00:05:37.723 CC lib/nbd/nbd_rpc.o 00:05:37.723 CC lib/scsi/port.o 00:05:37.723 CC lib/scsi/lun.o 00:05:37.723 CC lib/nvmf/ctrlr.o 00:05:37.723 CC lib/nbd/nbd.o 00:05:37.723 SYMLINK libspdk_lvol.so 00:05:37.723 CC lib/ublk/ublk.o 00:05:37.723 CC lib/nvmf/ctrlr_discovery.o 00:05:37.723 CC lib/scsi/scsi.o 00:05:37.723 CC lib/scsi/scsi_bdev.o 00:05:37.982 CC lib/ftl/ftl_layout.o 00:05:37.982 CC lib/ftl/ftl_debug.o 00:05:37.982 CC lib/ftl/ftl_io.o 00:05:37.982 CC lib/ftl/ftl_sb.o 00:05:37.982 CC lib/ublk/ublk_rpc.o 00:05:38.241 LIB libspdk_nbd.a 00:05:38.241 SO libspdk_nbd.so.7.0 00:05:38.241 CC lib/nvmf/ctrlr_bdev.o 00:05:38.241 CC lib/nvmf/subsystem.o 00:05:38.241 CC lib/nvmf/nvmf.o 00:05:38.241 SYMLINK libspdk_nbd.so 00:05:38.241 CC lib/nvmf/nvmf_rpc.o 00:05:38.241 CC lib/ftl/ftl_l2p.o 00:05:38.241 CC lib/nvmf/transport.o 00:05:38.241 CC lib/ftl/ftl_l2p_flat.o 00:05:38.241 LIB libspdk_ublk.a 00:05:38.241 CC lib/scsi/scsi_pr.o 00:05:38.241 SO libspdk_ublk.so.3.0 00:05:38.500 SYMLINK libspdk_ublk.so 00:05:38.500 CC lib/ftl/ftl_nv_cache.o 00:05:38.500 CC lib/nvmf/tcp.o 00:05:38.500 CC lib/nvmf/stubs.o 00:05:38.771 CC lib/scsi/scsi_rpc.o 00:05:38.771 CC lib/scsi/task.o 00:05:38.771 CC lib/nvmf/mdns_server.o 00:05:39.044 CC lib/nvmf/vfio_user.o 00:05:39.044 CC lib/nvmf/rdma.o 00:05:39.044 LIB libspdk_scsi.a 00:05:39.044 CC lib/nvmf/auth.o 00:05:39.044 CC lib/ftl/ftl_band.o 00:05:39.044 SO libspdk_scsi.so.9.0 00:05:39.302 SYMLINK libspdk_scsi.so 00:05:39.302 CC lib/ftl/ftl_band_ops.o 00:05:39.302 CC 
lib/ftl/ftl_writer.o 00:05:39.560 CC lib/iscsi/conn.o 00:05:39.560 CC lib/ftl/ftl_rq.o 00:05:39.560 CC lib/vhost/vhost.o 00:05:39.560 CC lib/vhost/vhost_rpc.o 00:05:39.560 CC lib/ftl/ftl_reloc.o 00:05:39.560 CC lib/iscsi/init_grp.o 00:05:39.819 CC lib/ftl/ftl_l2p_cache.o 00:05:39.819 CC lib/vhost/vhost_scsi.o 00:05:39.819 CC lib/vhost/vhost_blk.o 00:05:39.819 CC lib/vhost/rte_vhost_user.o 00:05:40.081 CC lib/ftl/ftl_p2l.o 00:05:40.081 CC lib/iscsi/iscsi.o 00:05:40.342 CC lib/iscsi/param.o 00:05:40.342 CC lib/iscsi/portal_grp.o 00:05:40.342 CC lib/iscsi/tgt_node.o 00:05:40.342 CC lib/ftl/ftl_p2l_log.o 00:05:40.600 CC lib/iscsi/iscsi_subsystem.o 00:05:40.600 CC lib/iscsi/iscsi_rpc.o 00:05:40.600 CC lib/ftl/mngt/ftl_mngt.o 00:05:40.600 CC lib/iscsi/task.o 00:05:40.859 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:40.859 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:40.859 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:40.859 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:40.859 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:40.859 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:41.117 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:41.117 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:41.117 LIB libspdk_vhost.a 00:05:41.117 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:41.117 SO libspdk_vhost.so.8.0 00:05:41.117 LIB libspdk_nvmf.a 00:05:41.117 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:41.117 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:41.117 SYMLINK libspdk_vhost.so 00:05:41.117 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:41.117 SO libspdk_nvmf.so.19.0 00:05:41.376 CC lib/ftl/utils/ftl_conf.o 00:05:41.376 CC lib/ftl/utils/ftl_md.o 00:05:41.376 CC lib/ftl/utils/ftl_mempool.o 00:05:41.376 CC lib/ftl/utils/ftl_bitmap.o 00:05:41.376 CC lib/ftl/utils/ftl_property.o 00:05:41.376 SYMLINK libspdk_nvmf.so 00:05:41.376 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:41.376 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:41.376 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:41.376 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:41.635 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:41.635 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:41.635 LIB libspdk_iscsi.a 00:05:41.635 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:41.635 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:41.635 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:41.635 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:41.635 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:41.635 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:41.635 SO libspdk_iscsi.so.8.0 00:05:41.635 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:41.635 CC lib/ftl/base/ftl_base_dev.o 00:05:41.894 CC lib/ftl/base/ftl_base_bdev.o 00:05:41.894 CC lib/ftl/ftl_trace.o 00:05:41.894 SYMLINK libspdk_iscsi.so 00:05:42.153 LIB libspdk_ftl.a 00:05:42.412 SO libspdk_ftl.so.9.0 00:05:42.670 SYMLINK libspdk_ftl.so 00:05:42.929 CC module/env_dpdk/env_dpdk_rpc.o 00:05:42.929 CC module/vfu_device/vfu_virtio.o 00:05:42.929 CC module/sock/posix/posix.o 00:05:42.929 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:42.929 CC module/sock/uring/uring.o 00:05:42.929 CC module/blob/bdev/blob_bdev.o 00:05:42.929 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:42.929 CC module/keyring/file/keyring.o 00:05:42.929 CC module/accel/error/accel_error.o 00:05:42.929 CC module/fsdev/aio/fsdev_aio.o 00:05:43.188 LIB libspdk_env_dpdk_rpc.a 00:05:43.188 SO libspdk_env_dpdk_rpc.so.6.0 00:05:43.188 SYMLINK libspdk_env_dpdk_rpc.so 00:05:43.188 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:43.188 CC module/keyring/file/keyring_rpc.o 00:05:43.188 LIB libspdk_scheduler_dpdk_governor.a 00:05:43.188 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:43.188 
CC module/accel/error/accel_error_rpc.o 00:05:43.188 LIB libspdk_scheduler_dynamic.a 00:05:43.188 SO libspdk_scheduler_dynamic.so.4.0 00:05:43.188 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:43.188 SYMLINK libspdk_scheduler_dynamic.so 00:05:43.446 CC module/fsdev/aio/linux_aio_mgr.o 00:05:43.446 LIB libspdk_blob_bdev.a 00:05:43.446 LIB libspdk_keyring_file.a 00:05:43.446 SO libspdk_blob_bdev.so.11.0 00:05:43.446 LIB libspdk_accel_error.a 00:05:43.446 SO libspdk_keyring_file.so.2.0 00:05:43.446 SO libspdk_accel_error.so.2.0 00:05:43.446 SYMLINK libspdk_blob_bdev.so 00:05:43.446 SYMLINK libspdk_keyring_file.so 00:05:43.446 SYMLINK libspdk_accel_error.so 00:05:43.446 CC module/scheduler/gscheduler/gscheduler.o 00:05:43.446 CC module/accel/ioat/accel_ioat.o 00:05:43.705 CC module/vfu_device/vfu_virtio_blk.o 00:05:43.705 CC module/accel/dsa/accel_dsa.o 00:05:43.705 CC module/keyring/linux/keyring.o 00:05:43.705 LIB libspdk_fsdev_aio.a 00:05:43.705 LIB libspdk_scheduler_gscheduler.a 00:05:43.705 SO libspdk_scheduler_gscheduler.so.4.0 00:05:43.705 SO libspdk_fsdev_aio.so.1.0 00:05:43.705 LIB libspdk_sock_uring.a 00:05:43.705 LIB libspdk_sock_posix.a 00:05:43.705 SO libspdk_sock_uring.so.5.0 00:05:43.705 CC module/accel/ioat/accel_ioat_rpc.o 00:05:43.705 CC module/bdev/delay/vbdev_delay.o 00:05:43.705 SO libspdk_sock_posix.so.6.0 00:05:43.705 SYMLINK libspdk_scheduler_gscheduler.so 00:05:43.705 CC module/vfu_device/vfu_virtio_scsi.o 00:05:43.705 SYMLINK libspdk_fsdev_aio.so 00:05:43.705 CC module/keyring/linux/keyring_rpc.o 00:05:43.705 CC module/blobfs/bdev/blobfs_bdev.o 00:05:43.963 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:43.963 SYMLINK libspdk_sock_uring.so 00:05:43.963 CC module/vfu_device/vfu_virtio_rpc.o 00:05:43.963 SYMLINK libspdk_sock_posix.so 00:05:43.963 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:43.963 CC module/vfu_device/vfu_virtio_fs.o 00:05:43.963 LIB libspdk_accel_ioat.a 00:05:43.963 SO libspdk_accel_ioat.so.6.0 00:05:43.963 LIB libspdk_keyring_linux.a 00:05:43.963 CC module/accel/dsa/accel_dsa_rpc.o 00:05:43.963 SO libspdk_keyring_linux.so.1.0 00:05:43.963 SYMLINK libspdk_accel_ioat.so 00:05:43.963 LIB libspdk_blobfs_bdev.a 00:05:43.963 SO libspdk_blobfs_bdev.so.6.0 00:05:43.963 SYMLINK libspdk_keyring_linux.so 00:05:44.222 LIB libspdk_accel_dsa.a 00:05:44.222 SYMLINK libspdk_blobfs_bdev.so 00:05:44.222 SO libspdk_accel_dsa.so.5.0 00:05:44.222 LIB libspdk_bdev_delay.a 00:05:44.222 LIB libspdk_vfu_device.a 00:05:44.222 CC module/accel/iaa/accel_iaa.o 00:05:44.222 CC module/bdev/error/vbdev_error.o 00:05:44.222 SO libspdk_bdev_delay.so.6.0 00:05:44.222 SO libspdk_vfu_device.so.3.0 00:05:44.222 CC module/bdev/gpt/gpt.o 00:05:44.222 SYMLINK libspdk_accel_dsa.so 00:05:44.222 CC module/bdev/lvol/vbdev_lvol.o 00:05:44.222 CC module/accel/iaa/accel_iaa_rpc.o 00:05:44.222 CC module/bdev/malloc/bdev_malloc.o 00:05:44.222 SYMLINK libspdk_bdev_delay.so 00:05:44.222 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:44.222 CC module/bdev/nvme/bdev_nvme.o 00:05:44.222 CC module/bdev/null/bdev_null.o 00:05:44.222 SYMLINK libspdk_vfu_device.so 00:05:44.222 CC module/bdev/gpt/vbdev_gpt.o 00:05:44.480 LIB libspdk_accel_iaa.a 00:05:44.480 CC module/bdev/error/vbdev_error_rpc.o 00:05:44.480 SO libspdk_accel_iaa.so.3.0 00:05:44.480 CC module/bdev/null/bdev_null_rpc.o 00:05:44.480 SYMLINK libspdk_accel_iaa.so 00:05:44.480 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:44.480 CC module/bdev/nvme/nvme_rpc.o 00:05:44.480 CC module/bdev/passthru/vbdev_passthru.o 00:05:44.480 LIB libspdk_bdev_gpt.a 
00:05:44.739 LIB libspdk_bdev_error.a 00:05:44.739 SO libspdk_bdev_gpt.so.6.0 00:05:44.739 LIB libspdk_bdev_malloc.a 00:05:44.739 SO libspdk_bdev_error.so.6.0 00:05:44.739 CC module/bdev/raid/bdev_raid.o 00:05:44.739 SO libspdk_bdev_malloc.so.6.0 00:05:44.739 LIB libspdk_bdev_null.a 00:05:44.739 SYMLINK libspdk_bdev_gpt.so 00:05:44.739 SO libspdk_bdev_null.so.6.0 00:05:44.739 SYMLINK libspdk_bdev_error.so 00:05:44.739 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:44.739 SYMLINK libspdk_bdev_malloc.so 00:05:44.739 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:44.739 SYMLINK libspdk_bdev_null.so 00:05:44.739 CC module/bdev/nvme/bdev_mdns_client.o 00:05:44.739 CC module/bdev/nvme/vbdev_opal.o 00:05:44.997 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:44.997 CC module/bdev/split/vbdev_split.o 00:05:44.997 LIB libspdk_bdev_passthru.a 00:05:44.997 SO libspdk_bdev_passthru.so.6.0 00:05:44.997 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:44.997 SYMLINK libspdk_bdev_passthru.so 00:05:44.997 CC module/bdev/split/vbdev_split_rpc.o 00:05:45.256 CC module/bdev/raid/bdev_raid_rpc.o 00:05:45.256 CC module/bdev/uring/bdev_uring.o 00:05:45.256 CC module/bdev/raid/bdev_raid_sb.o 00:05:45.256 CC module/bdev/raid/raid0.o 00:05:45.256 LIB libspdk_bdev_lvol.a 00:05:45.256 SO libspdk_bdev_lvol.so.6.0 00:05:45.256 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:45.256 LIB libspdk_bdev_split.a 00:05:45.256 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:45.256 SO libspdk_bdev_split.so.6.0 00:05:45.256 SYMLINK libspdk_bdev_lvol.so 00:05:45.256 CC module/bdev/uring/bdev_uring_rpc.o 00:05:45.256 SYMLINK libspdk_bdev_split.so 00:05:45.515 CC module/bdev/raid/raid1.o 00:05:45.515 LIB libspdk_bdev_zone_block.a 00:05:45.515 CC module/bdev/raid/concat.o 00:05:45.515 SO libspdk_bdev_zone_block.so.6.0 00:05:45.515 CC module/bdev/aio/bdev_aio.o 00:05:45.515 SYMLINK libspdk_bdev_zone_block.so 00:05:45.515 LIB libspdk_bdev_uring.a 00:05:45.515 CC module/bdev/aio/bdev_aio_rpc.o 00:05:45.515 SO libspdk_bdev_uring.so.6.0 00:05:45.515 SYMLINK libspdk_bdev_uring.so 00:05:45.774 CC module/bdev/iscsi/bdev_iscsi.o 00:05:45.774 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:45.774 CC module/bdev/ftl/bdev_ftl.o 00:05:45.774 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:45.774 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:45.774 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:45.774 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:45.774 LIB libspdk_bdev_raid.a 00:05:45.774 SO libspdk_bdev_raid.so.6.0 00:05:45.774 SYMLINK libspdk_bdev_raid.so 00:05:45.774 LIB libspdk_bdev_aio.a 00:05:46.032 SO libspdk_bdev_aio.so.6.0 00:05:46.032 LIB libspdk_bdev_ftl.a 00:05:46.032 SYMLINK libspdk_bdev_aio.so 00:05:46.032 SO libspdk_bdev_ftl.so.6.0 00:05:46.032 LIB libspdk_bdev_iscsi.a 00:05:46.032 SYMLINK libspdk_bdev_ftl.so 00:05:46.032 SO libspdk_bdev_iscsi.so.6.0 00:05:46.032 SYMLINK libspdk_bdev_iscsi.so 00:05:46.291 LIB libspdk_bdev_virtio.a 00:05:46.291 SO libspdk_bdev_virtio.so.6.0 00:05:46.291 SYMLINK libspdk_bdev_virtio.so 00:05:46.550 LIB libspdk_bdev_nvme.a 00:05:46.550 SO libspdk_bdev_nvme.so.7.0 00:05:46.809 SYMLINK libspdk_bdev_nvme.so 00:05:47.068 CC module/event/subsystems/scheduler/scheduler.o 00:05:47.068 CC module/event/subsystems/fsdev/fsdev.o 00:05:47.068 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:47.068 CC module/event/subsystems/iobuf/iobuf.o 00:05:47.068 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:47.068 CC module/event/subsystems/vmd/vmd.o 00:05:47.068 CC module/event/subsystems/sock/sock.o 00:05:47.068 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:05:47.068 CC module/event/subsystems/keyring/keyring.o 00:05:47.068 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:47.326 LIB libspdk_event_fsdev.a 00:05:47.326 LIB libspdk_event_sock.a 00:05:47.326 LIB libspdk_event_iobuf.a 00:05:47.326 LIB libspdk_event_keyring.a 00:05:47.326 LIB libspdk_event_scheduler.a 00:05:47.326 LIB libspdk_event_vhost_blk.a 00:05:47.326 SO libspdk_event_sock.so.5.0 00:05:47.326 SO libspdk_event_fsdev.so.1.0 00:05:47.326 LIB libspdk_event_vmd.a 00:05:47.326 LIB libspdk_event_vfu_tgt.a 00:05:47.326 SO libspdk_event_keyring.so.1.0 00:05:47.326 SO libspdk_event_iobuf.so.3.0 00:05:47.326 SO libspdk_event_scheduler.so.4.0 00:05:47.326 SO libspdk_event_vhost_blk.so.3.0 00:05:47.326 SO libspdk_event_vfu_tgt.so.3.0 00:05:47.326 SO libspdk_event_vmd.so.6.0 00:05:47.326 SYMLINK libspdk_event_sock.so 00:05:47.326 SYMLINK libspdk_event_fsdev.so 00:05:47.326 SYMLINK libspdk_event_keyring.so 00:05:47.326 SYMLINK libspdk_event_scheduler.so 00:05:47.326 SYMLINK libspdk_event_vhost_blk.so 00:05:47.326 SYMLINK libspdk_event_vfu_tgt.so 00:05:47.326 SYMLINK libspdk_event_iobuf.so 00:05:47.326 SYMLINK libspdk_event_vmd.so 00:05:47.585 CC module/event/subsystems/accel/accel.o 00:05:47.843 LIB libspdk_event_accel.a 00:05:47.843 SO libspdk_event_accel.so.6.0 00:05:47.843 SYMLINK libspdk_event_accel.so 00:05:48.102 CC module/event/subsystems/bdev/bdev.o 00:05:48.362 LIB libspdk_event_bdev.a 00:05:48.362 SO libspdk_event_bdev.so.6.0 00:05:48.622 SYMLINK libspdk_event_bdev.so 00:05:48.622 CC module/event/subsystems/nbd/nbd.o 00:05:48.622 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:48.622 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:48.622 CC module/event/subsystems/scsi/scsi.o 00:05:48.622 CC module/event/subsystems/ublk/ublk.o 00:05:48.881 LIB libspdk_event_nbd.a 00:05:48.881 LIB libspdk_event_ublk.a 00:05:48.881 LIB libspdk_event_scsi.a 00:05:48.881 SO libspdk_event_nbd.so.6.0 00:05:48.881 SO libspdk_event_ublk.so.3.0 00:05:48.881 SO libspdk_event_scsi.so.6.0 00:05:49.140 SYMLINK libspdk_event_ublk.so 00:05:49.140 SYMLINK libspdk_event_nbd.so 00:05:49.140 LIB libspdk_event_nvmf.a 00:05:49.140 SYMLINK libspdk_event_scsi.so 00:05:49.140 SO libspdk_event_nvmf.so.6.0 00:05:49.140 SYMLINK libspdk_event_nvmf.so 00:05:49.140 CC module/event/subsystems/iscsi/iscsi.o 00:05:49.398 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:49.398 LIB libspdk_event_vhost_scsi.a 00:05:49.398 LIB libspdk_event_iscsi.a 00:05:49.398 SO libspdk_event_vhost_scsi.so.3.0 00:05:49.398 SO libspdk_event_iscsi.so.6.0 00:05:49.657 SYMLINK libspdk_event_vhost_scsi.so 00:05:49.657 SYMLINK libspdk_event_iscsi.so 00:05:49.657 SO libspdk.so.6.0 00:05:49.657 SYMLINK libspdk.so 00:05:49.916 TEST_HEADER include/spdk/accel.h 00:05:49.916 TEST_HEADER include/spdk/accel_module.h 00:05:49.916 CC app/trace_record/trace_record.o 00:05:49.916 CXX app/trace/trace.o 00:05:49.916 TEST_HEADER include/spdk/assert.h 00:05:49.916 TEST_HEADER include/spdk/barrier.h 00:05:49.916 TEST_HEADER include/spdk/base64.h 00:05:49.916 TEST_HEADER include/spdk/bdev.h 00:05:49.916 TEST_HEADER include/spdk/bdev_module.h 00:05:49.916 TEST_HEADER include/spdk/bdev_zone.h 00:05:49.916 TEST_HEADER include/spdk/bit_array.h 00:05:49.916 TEST_HEADER include/spdk/bit_pool.h 00:05:49.916 TEST_HEADER include/spdk/blob_bdev.h 00:05:49.916 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:49.916 TEST_HEADER include/spdk/blobfs.h 00:05:49.916 TEST_HEADER include/spdk/blob.h 00:05:49.916 TEST_HEADER 
include/spdk/conf.h 00:05:49.916 TEST_HEADER include/spdk/config.h 00:05:49.916 TEST_HEADER include/spdk/cpuset.h 00:05:49.916 TEST_HEADER include/spdk/crc16.h 00:05:49.916 TEST_HEADER include/spdk/crc32.h 00:05:49.916 TEST_HEADER include/spdk/crc64.h 00:05:49.916 TEST_HEADER include/spdk/dif.h 00:05:49.916 TEST_HEADER include/spdk/dma.h 00:05:49.916 TEST_HEADER include/spdk/endian.h 00:05:49.916 TEST_HEADER include/spdk/env_dpdk.h 00:05:49.916 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:49.916 TEST_HEADER include/spdk/env.h 00:05:49.916 TEST_HEADER include/spdk/event.h 00:05:49.916 CC app/nvmf_tgt/nvmf_main.o 00:05:49.916 TEST_HEADER include/spdk/fd_group.h 00:05:50.174 TEST_HEADER include/spdk/fd.h 00:05:50.174 TEST_HEADER include/spdk/file.h 00:05:50.174 TEST_HEADER include/spdk/fsdev.h 00:05:50.174 TEST_HEADER include/spdk/fsdev_module.h 00:05:50.174 TEST_HEADER include/spdk/ftl.h 00:05:50.174 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:50.174 TEST_HEADER include/spdk/gpt_spec.h 00:05:50.174 TEST_HEADER include/spdk/hexlify.h 00:05:50.174 TEST_HEADER include/spdk/histogram_data.h 00:05:50.174 TEST_HEADER include/spdk/idxd.h 00:05:50.174 TEST_HEADER include/spdk/idxd_spec.h 00:05:50.174 CC test/thread/poller_perf/poller_perf.o 00:05:50.174 TEST_HEADER include/spdk/init.h 00:05:50.174 CC examples/util/zipf/zipf.o 00:05:50.174 TEST_HEADER include/spdk/ioat.h 00:05:50.174 TEST_HEADER include/spdk/ioat_spec.h 00:05:50.174 TEST_HEADER include/spdk/iscsi_spec.h 00:05:50.174 TEST_HEADER include/spdk/json.h 00:05:50.174 TEST_HEADER include/spdk/jsonrpc.h 00:05:50.174 CC examples/ioat/perf/perf.o 00:05:50.174 TEST_HEADER include/spdk/keyring.h 00:05:50.174 TEST_HEADER include/spdk/keyring_module.h 00:05:50.174 TEST_HEADER include/spdk/likely.h 00:05:50.174 TEST_HEADER include/spdk/log.h 00:05:50.174 TEST_HEADER include/spdk/lvol.h 00:05:50.174 TEST_HEADER include/spdk/md5.h 00:05:50.174 TEST_HEADER include/spdk/memory.h 00:05:50.174 TEST_HEADER include/spdk/mmio.h 00:05:50.174 TEST_HEADER include/spdk/nbd.h 00:05:50.174 TEST_HEADER include/spdk/net.h 00:05:50.174 TEST_HEADER include/spdk/notify.h 00:05:50.174 TEST_HEADER include/spdk/nvme.h 00:05:50.174 CC test/dma/test_dma/test_dma.o 00:05:50.174 TEST_HEADER include/spdk/nvme_intel.h 00:05:50.174 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:50.174 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:50.174 TEST_HEADER include/spdk/nvme_spec.h 00:05:50.174 TEST_HEADER include/spdk/nvme_zns.h 00:05:50.174 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:50.174 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:50.174 TEST_HEADER include/spdk/nvmf.h 00:05:50.174 TEST_HEADER include/spdk/nvmf_spec.h 00:05:50.174 TEST_HEADER include/spdk/nvmf_transport.h 00:05:50.174 TEST_HEADER include/spdk/opal.h 00:05:50.174 CC test/app/bdev_svc/bdev_svc.o 00:05:50.174 TEST_HEADER include/spdk/opal_spec.h 00:05:50.174 TEST_HEADER include/spdk/pci_ids.h 00:05:50.174 TEST_HEADER include/spdk/pipe.h 00:05:50.174 TEST_HEADER include/spdk/queue.h 00:05:50.174 TEST_HEADER include/spdk/reduce.h 00:05:50.174 TEST_HEADER include/spdk/rpc.h 00:05:50.174 TEST_HEADER include/spdk/scheduler.h 00:05:50.174 TEST_HEADER include/spdk/scsi.h 00:05:50.174 TEST_HEADER include/spdk/scsi_spec.h 00:05:50.174 TEST_HEADER include/spdk/sock.h 00:05:50.174 TEST_HEADER include/spdk/stdinc.h 00:05:50.174 TEST_HEADER include/spdk/string.h 00:05:50.174 TEST_HEADER include/spdk/thread.h 00:05:50.174 TEST_HEADER include/spdk/trace.h 00:05:50.174 TEST_HEADER include/spdk/trace_parser.h 00:05:50.174 
TEST_HEADER include/spdk/tree.h 00:05:50.174 TEST_HEADER include/spdk/ublk.h 00:05:50.174 TEST_HEADER include/spdk/util.h 00:05:50.174 TEST_HEADER include/spdk/uuid.h 00:05:50.174 TEST_HEADER include/spdk/version.h 00:05:50.174 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:50.174 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:50.174 TEST_HEADER include/spdk/vhost.h 00:05:50.174 TEST_HEADER include/spdk/vmd.h 00:05:50.174 TEST_HEADER include/spdk/xor.h 00:05:50.174 TEST_HEADER include/spdk/zipf.h 00:05:50.174 CXX test/cpp_headers/accel.o 00:05:50.174 LINK poller_perf 00:05:50.174 LINK zipf 00:05:50.174 LINK interrupt_tgt 00:05:50.174 LINK nvmf_tgt 00:05:50.451 LINK spdk_trace_record 00:05:50.451 LINK ioat_perf 00:05:50.451 LINK bdev_svc 00:05:50.451 CXX test/cpp_headers/accel_module.o 00:05:50.451 LINK spdk_trace 00:05:50.451 CC test/rpc_client/rpc_client_test.o 00:05:50.709 CXX test/cpp_headers/assert.o 00:05:50.709 CC test/app/histogram_perf/histogram_perf.o 00:05:50.709 CC examples/ioat/verify/verify.o 00:05:50.709 CC test/event/event_perf/event_perf.o 00:05:50.709 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:50.709 CC test/env/mem_callbacks/mem_callbacks.o 00:05:50.709 CC test/env/vtophys/vtophys.o 00:05:50.709 LINK test_dma 00:05:50.709 LINK histogram_perf 00:05:50.709 CXX test/cpp_headers/barrier.o 00:05:50.709 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.709 LINK event_perf 00:05:50.709 LINK rpc_client_test 00:05:50.709 LINK verify 00:05:50.968 LINK vtophys 00:05:50.968 CXX test/cpp_headers/base64.o 00:05:50.968 CXX test/cpp_headers/bdev.o 00:05:50.968 CC test/event/reactor/reactor.o 00:05:50.968 LINK iscsi_tgt 00:05:50.968 CC app/spdk_lspci/spdk_lspci.o 00:05:50.968 LINK nvme_fuzz 00:05:51.227 CC app/spdk_tgt/spdk_tgt.o 00:05:51.227 CXX test/cpp_headers/bdev_module.o 00:05:51.227 LINK reactor 00:05:51.227 CC examples/thread/thread/thread_ex.o 00:05:51.227 CC examples/sock/hello_world/hello_sock.o 00:05:51.227 LINK spdk_lspci 00:05:51.227 CC examples/vmd/lsvmd/lsvmd.o 00:05:51.227 CC examples/vmd/led/led.o 00:05:51.227 LINK mem_callbacks 00:05:51.227 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:51.485 CXX test/cpp_headers/bdev_zone.o 00:05:51.485 LINK spdk_tgt 00:05:51.485 CC test/event/reactor_perf/reactor_perf.o 00:05:51.485 LINK lsvmd 00:05:51.485 CC app/spdk_nvme_perf/perf.o 00:05:51.485 LINK hello_sock 00:05:51.485 LINK led 00:05:51.485 LINK thread 00:05:51.485 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:51.485 LINK reactor_perf 00:05:51.485 CXX test/cpp_headers/bit_array.o 00:05:51.743 CC app/spdk_nvme_identify/identify.o 00:05:51.743 CC test/app/jsoncat/jsoncat.o 00:05:51.743 CC test/app/stub/stub.o 00:05:51.743 LINK env_dpdk_post_init 00:05:51.743 CXX test/cpp_headers/bit_pool.o 00:05:51.743 CC test/event/app_repeat/app_repeat.o 00:05:51.743 CC app/spdk_nvme_discover/discovery_aer.o 00:05:51.743 LINK jsoncat 00:05:51.743 CC examples/idxd/perf/perf.o 00:05:52.001 LINK stub 00:05:52.001 CXX test/cpp_headers/blob_bdev.o 00:05:52.001 LINK app_repeat 00:05:52.001 CC test/env/memory/memory_ut.o 00:05:52.001 CC test/env/pci/pci_ut.o 00:05:52.001 LINK spdk_nvme_discover 00:05:52.260 CXX test/cpp_headers/blobfs_bdev.o 00:05:52.260 LINK idxd_perf 00:05:52.260 CC test/event/scheduler/scheduler.o 00:05:52.260 LINK spdk_nvme_perf 00:05:52.260 CC test/accel/dif/dif.o 00:05:52.260 CXX test/cpp_headers/blobfs.o 00:05:52.518 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:52.518 CC app/spdk_top/spdk_top.o 00:05:52.518 LINK pci_ut 00:05:52.518 LINK spdk_nvme_identify 00:05:52.518 
CXX test/cpp_headers/blob.o 00:05:52.518 LINK scheduler 00:05:52.518 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:52.518 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:52.776 CXX test/cpp_headers/conf.o 00:05:52.776 CXX test/cpp_headers/config.o 00:05:52.776 CXX test/cpp_headers/cpuset.o 00:05:52.776 LINK hello_fsdev 00:05:52.776 CXX test/cpp_headers/crc16.o 00:05:53.035 CC examples/accel/perf/accel_perf.o 00:05:53.035 LINK iscsi_fuzz 00:05:53.035 LINK dif 00:05:53.035 CXX test/cpp_headers/crc32.o 00:05:53.035 CC test/blobfs/mkfs/mkfs.o 00:05:53.035 LINK vhost_fuzz 00:05:53.035 CC test/nvme/aer/aer.o 00:05:53.294 CC test/lvol/esnap/esnap.o 00:05:53.294 LINK memory_ut 00:05:53.294 CXX test/cpp_headers/crc64.o 00:05:53.294 CXX test/cpp_headers/dif.o 00:05:53.294 LINK mkfs 00:05:53.294 LINK spdk_top 00:05:53.294 CXX test/cpp_headers/dma.o 00:05:53.552 CC test/bdev/bdevio/bdevio.o 00:05:53.552 CC examples/blob/hello_world/hello_blob.o 00:05:53.552 LINK accel_perf 00:05:53.552 LINK aer 00:05:53.552 CXX test/cpp_headers/endian.o 00:05:53.552 CC test/nvme/reset/reset.o 00:05:53.552 CXX test/cpp_headers/env_dpdk.o 00:05:53.552 CC app/vhost/vhost.o 00:05:53.552 CXX test/cpp_headers/env.o 00:05:53.811 CC test/nvme/sgl/sgl.o 00:05:53.811 LINK hello_blob 00:05:53.811 CC test/nvme/e2edp/nvme_dp.o 00:05:53.811 CC test/nvme/overhead/overhead.o 00:05:53.811 LINK vhost 00:05:53.811 CC test/nvme/err_injection/err_injection.o 00:05:53.811 LINK reset 00:05:53.811 CXX test/cpp_headers/event.o 00:05:53.811 LINK bdevio 00:05:54.069 LINK sgl 00:05:54.069 CXX test/cpp_headers/fd_group.o 00:05:54.069 LINK err_injection 00:05:54.069 CC test/nvme/startup/startup.o 00:05:54.069 LINK nvme_dp 00:05:54.069 CC examples/blob/cli/blobcli.o 00:05:54.069 CXX test/cpp_headers/fd.o 00:05:54.069 LINK overhead 00:05:54.069 CC app/spdk_dd/spdk_dd.o 00:05:54.328 CXX test/cpp_headers/file.o 00:05:54.328 LINK startup 00:05:54.328 CC test/nvme/reserve/reserve.o 00:05:54.328 CC test/nvme/simple_copy/simple_copy.o 00:05:54.328 CC test/nvme/connect_stress/connect_stress.o 00:05:54.328 CC examples/nvme/hello_world/hello_world.o 00:05:54.328 CXX test/cpp_headers/fsdev.o 00:05:54.328 CC examples/bdev/hello_world/hello_bdev.o 00:05:54.587 LINK reserve 00:05:54.587 LINK connect_stress 00:05:54.587 LINK simple_copy 00:05:54.587 LINK blobcli 00:05:54.587 CC app/fio/nvme/fio_plugin.o 00:05:54.587 LINK spdk_dd 00:05:54.587 CXX test/cpp_headers/fsdev_module.o 00:05:54.587 LINK hello_world 00:05:54.587 LINK hello_bdev 00:05:54.587 CXX test/cpp_headers/ftl.o 00:05:54.587 CXX test/cpp_headers/fuse_dispatcher.o 00:05:54.846 CXX test/cpp_headers/gpt_spec.o 00:05:54.846 CC test/nvme/boot_partition/boot_partition.o 00:05:54.846 CC examples/nvme/reconnect/reconnect.o 00:05:54.846 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:54.846 CC examples/nvme/arbitration/arbitration.o 00:05:54.846 CXX test/cpp_headers/hexlify.o 00:05:54.846 CC examples/nvme/hotplug/hotplug.o 00:05:54.846 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:54.846 LINK boot_partition 00:05:55.105 CC examples/bdev/bdevperf/bdevperf.o 00:05:55.105 CXX test/cpp_headers/histogram_data.o 00:05:55.105 LINK spdk_nvme 00:05:55.105 LINK cmb_copy 00:05:55.105 CC test/nvme/compliance/nvme_compliance.o 00:05:55.105 LINK reconnect 00:05:55.105 LINK hotplug 00:05:55.364 LINK arbitration 00:05:55.364 CXX test/cpp_headers/idxd.o 00:05:55.364 CXX test/cpp_headers/idxd_spec.o 00:05:55.364 CC app/fio/bdev/fio_plugin.o 00:05:55.364 LINK nvme_manage 00:05:55.364 CXX test/cpp_headers/init.o 00:05:55.364 CC 
examples/nvme/abort/abort.o 00:05:55.622 CXX test/cpp_headers/ioat.o 00:05:55.622 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:55.622 CC test/nvme/fused_ordering/fused_ordering.o 00:05:55.622 LINK nvme_compliance 00:05:55.622 CXX test/cpp_headers/ioat_spec.o 00:05:55.622 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:55.622 CXX test/cpp_headers/iscsi_spec.o 00:05:55.622 LINK pmr_persistence 00:05:55.881 CXX test/cpp_headers/json.o 00:05:55.881 LINK fused_ordering 00:05:55.881 LINK bdevperf 00:05:55.881 CC test/nvme/fdp/fdp.o 00:05:55.881 CXX test/cpp_headers/jsonrpc.o 00:05:55.881 LINK doorbell_aers 00:05:55.881 CXX test/cpp_headers/keyring.o 00:05:55.881 LINK abort 00:05:55.881 LINK spdk_bdev 00:05:55.881 CXX test/cpp_headers/keyring_module.o 00:05:55.881 CXX test/cpp_headers/likely.o 00:05:55.881 CXX test/cpp_headers/log.o 00:05:56.139 CC test/nvme/cuse/cuse.o 00:05:56.139 CXX test/cpp_headers/lvol.o 00:05:56.139 CXX test/cpp_headers/md5.o 00:05:56.139 CXX test/cpp_headers/memory.o 00:05:56.139 CXX test/cpp_headers/mmio.o 00:05:56.139 LINK fdp 00:05:56.139 CXX test/cpp_headers/nbd.o 00:05:56.139 CXX test/cpp_headers/net.o 00:05:56.139 CXX test/cpp_headers/notify.o 00:05:56.139 CXX test/cpp_headers/nvme.o 00:05:56.139 CXX test/cpp_headers/nvme_intel.o 00:05:56.139 CC examples/nvmf/nvmf/nvmf.o 00:05:56.398 CXX test/cpp_headers/nvme_ocssd.o 00:05:56.398 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:56.398 CXX test/cpp_headers/nvme_spec.o 00:05:56.398 CXX test/cpp_headers/nvme_zns.o 00:05:56.398 CXX test/cpp_headers/nvmf_cmd.o 00:05:56.398 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:56.398 CXX test/cpp_headers/nvmf.o 00:05:56.398 CXX test/cpp_headers/nvmf_spec.o 00:05:56.398 CXX test/cpp_headers/nvmf_transport.o 00:05:56.398 CXX test/cpp_headers/opal.o 00:05:56.657 CXX test/cpp_headers/opal_spec.o 00:05:56.657 LINK nvmf 00:05:56.657 CXX test/cpp_headers/pci_ids.o 00:05:56.657 CXX test/cpp_headers/pipe.o 00:05:56.657 CXX test/cpp_headers/queue.o 00:05:56.657 CXX test/cpp_headers/reduce.o 00:05:56.657 CXX test/cpp_headers/rpc.o 00:05:56.657 CXX test/cpp_headers/scheduler.o 00:05:56.657 CXX test/cpp_headers/scsi.o 00:05:56.657 CXX test/cpp_headers/scsi_spec.o 00:05:56.657 CXX test/cpp_headers/sock.o 00:05:56.657 CXX test/cpp_headers/stdinc.o 00:05:56.657 CXX test/cpp_headers/string.o 00:05:56.916 CXX test/cpp_headers/thread.o 00:05:56.917 CXX test/cpp_headers/trace.o 00:05:56.917 CXX test/cpp_headers/trace_parser.o 00:05:56.917 CXX test/cpp_headers/tree.o 00:05:56.917 CXX test/cpp_headers/ublk.o 00:05:56.917 CXX test/cpp_headers/util.o 00:05:56.917 CXX test/cpp_headers/uuid.o 00:05:56.917 CXX test/cpp_headers/version.o 00:05:56.917 CXX test/cpp_headers/vfio_user_pci.o 00:05:56.917 CXX test/cpp_headers/vfio_user_spec.o 00:05:56.917 CXX test/cpp_headers/vhost.o 00:05:56.917 CXX test/cpp_headers/vmd.o 00:05:57.176 CXX test/cpp_headers/xor.o 00:05:57.176 CXX test/cpp_headers/zipf.o 00:05:57.435 LINK cuse 00:05:58.387 LINK esnap 00:05:58.387 00:05:58.387 real 1m25.430s 00:05:58.387 user 7m9.202s 00:05:58.387 sys 1m10.888s 00:05:58.387 10:19:33 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:58.387 ************************************ 00:05:58.387 END TEST make 00:05:58.387 ************************************ 00:05:58.387 10:19:33 make -- common/autotest_common.sh@10 -- $ set +x 00:05:58.647 10:19:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:58.647 10:19:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:58.647 10:19:33 -- pm/common@40 -- $ local 
monitor pid pids signal=TERM 00:05:58.647 10:19:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.647 10:19:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:58.647 10:19:33 -- pm/common@44 -- $ pid=6042 00:05:58.647 10:19:33 -- pm/common@50 -- $ kill -TERM 6042 00:05:58.647 10:19:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.647 10:19:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:58.647 10:19:33 -- pm/common@44 -- $ pid=6043 00:05:58.647 10:19:33 -- pm/common@50 -- $ kill -TERM 6043 00:05:58.647 10:19:33 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.647 10:19:33 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.647 10:19:33 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.647 10:19:33 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.647 10:19:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.647 10:19:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.647 10:19:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.647 10:19:33 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.647 10:19:33 -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.647 10:19:33 -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.647 10:19:33 -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.647 10:19:33 -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.647 10:19:33 -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.647 10:19:33 -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.647 10:19:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.647 10:19:33 -- scripts/common.sh@344 -- # case "$op" in 00:05:58.647 10:19:33 -- scripts/common.sh@345 -- # : 1 00:05:58.647 10:19:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.647 10:19:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.647 10:19:33 -- scripts/common.sh@365 -- # decimal 1 00:05:58.647 10:19:33 -- scripts/common.sh@353 -- # local d=1 00:05:58.647 10:19:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.647 10:19:33 -- scripts/common.sh@355 -- # echo 1 00:05:58.647 10:19:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.647 10:19:33 -- scripts/common.sh@366 -- # decimal 2 00:05:58.647 10:19:33 -- scripts/common.sh@353 -- # local d=2 00:05:58.647 10:19:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.647 10:19:33 -- scripts/common.sh@355 -- # echo 2 00:05:58.647 10:19:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.647 10:19:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.647 10:19:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.647 10:19:33 -- scripts/common.sh@368 -- # return 0 00:05:58.647 10:19:33 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.647 10:19:33 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.647 --rc genhtml_branch_coverage=1 00:05:58.647 --rc genhtml_function_coverage=1 00:05:58.647 --rc genhtml_legend=1 00:05:58.647 --rc geninfo_all_blocks=1 00:05:58.647 --rc geninfo_unexecuted_blocks=1 00:05:58.647 00:05:58.647 ' 00:05:58.647 10:19:33 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.647 --rc genhtml_branch_coverage=1 00:05:58.647 --rc genhtml_function_coverage=1 00:05:58.647 --rc genhtml_legend=1 00:05:58.647 --rc geninfo_all_blocks=1 00:05:58.647 --rc geninfo_unexecuted_blocks=1 00:05:58.647 00:05:58.647 ' 00:05:58.647 10:19:33 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.647 --rc genhtml_branch_coverage=1 00:05:58.647 --rc genhtml_function_coverage=1 00:05:58.647 --rc genhtml_legend=1 00:05:58.647 --rc geninfo_all_blocks=1 00:05:58.647 --rc geninfo_unexecuted_blocks=1 00:05:58.647 00:05:58.647 ' 00:05:58.647 10:19:33 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.647 --rc genhtml_branch_coverage=1 00:05:58.647 --rc genhtml_function_coverage=1 00:05:58.647 --rc genhtml_legend=1 00:05:58.647 --rc geninfo_all_blocks=1 00:05:58.647 --rc geninfo_unexecuted_blocks=1 00:05:58.647 00:05:58.647 ' 00:05:58.647 10:19:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.647 10:19:33 -- nvmf/common.sh@7 -- # uname -s 00:05:58.647 10:19:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.647 10:19:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.647 10:19:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.647 10:19:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.647 10:19:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.647 10:19:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.647 10:19:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.647 10:19:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.647 10:19:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.647 10:19:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.647 10:19:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:05:58.647 
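The nvmf/common.sh trace here and on the following lines pins down the TCP target address (127.0.0.1), port (4420), subsystem NQN (nqn.2016-06.io.spdk:testnqn) and a freshly generated host NQN/ID pair. As a hedged illustration only (the test scripts drive this through $NVME_CONNECT and the NVME_HOST array rather than a literal call like this), those values typically feed an nvme-cli connect invocation of this shape:

    HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}                # the UUID suffix doubles as the host ID
    nvme connect -t tcp -a 127.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"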
10:19:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:05:58.648 10:19:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.648 10:19:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.648 10:19:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:58.648 10:19:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.648 10:19:33 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.648 10:19:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.648 10:19:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.648 10:19:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.648 10:19:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.648 10:19:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.648 10:19:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.648 10:19:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.648 10:19:33 -- paths/export.sh@5 -- # export PATH 00:05:58.648 10:19:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.648 10:19:33 -- nvmf/common.sh@51 -- # : 0 00:05:58.648 10:19:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.648 10:19:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.648 10:19:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.648 10:19:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.648 10:19:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.648 10:19:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.648 10:19:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.648 10:19:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.648 10:19:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.648 10:19:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:58.648 10:19:33 -- spdk/autotest.sh@32 -- # uname -s 00:05:58.648 10:19:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:58.648 10:19:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:58.648 10:19:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:58.648 10:19:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:58.648 10:19:33 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:58.648 10:19:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:58.908 10:19:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:58.908 10:19:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:58.908 10:19:33 -- spdk/autotest.sh@48 -- # udevadm_pid=67531 00:05:58.908 10:19:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:58.908 10:19:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:58.908 10:19:33 -- pm/common@17 -- # local monitor 00:05:58.908 10:19:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.908 10:19:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.908 10:19:33 -- pm/common@25 -- # sleep 1 00:05:58.908 10:19:33 -- pm/common@21 -- # date +%s 00:05:58.908 10:19:33 -- pm/common@21 -- # date +%s 00:05:58.908 10:19:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733825973 00:05:58.908 10:19:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733825973 00:05:58.908 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733825973_collect-vmstat.pm.log 00:05:58.908 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733825973_collect-cpu-load.pm.log 00:05:59.859 10:19:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:59.859 10:19:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:59.859 10:19:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.859 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:59.859 10:19:34 -- spdk/autotest.sh@59 -- # create_test_list 00:05:59.859 10:19:34 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:59.859 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:59.859 10:19:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:59.859 10:19:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:59.859 10:19:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:59.859 10:19:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:59.859 10:19:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:59.860 10:19:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:59.860 10:19:34 -- common/autotest_common.sh@1455 -- # uname 00:05:59.860 10:19:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:59.860 10:19:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:59.860 10:19:34 -- common/autotest_common.sh@1475 -- # uname 00:05:59.860 10:19:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:59.860 10:19:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:59.860 10:19:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:59.860 lcov: LCOV version 1.15 00:05:59.860 10:19:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:14.760 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:14.760 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:32.850 10:20:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:32.850 10:20:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.850 10:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:32.850 10:20:04 -- spdk/autotest.sh@78 -- # rm -f 00:06:32.850 10:20:04 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:32.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.850 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:32.850 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:32.850 10:20:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:32.850 10:20:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:32.850 10:20:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:32.850 10:20:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:32.850 10:20:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:32.850 10:20:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:32.850 10:20:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:32.850 10:20:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:32.850 10:20:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:32.850 10:20:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:32.850 10:20:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:32.850 10:20:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:32.850 10:20:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:32.850 10:20:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:32.850 10:20:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:32.850 10:20:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:32.850 10:20:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:32.850 10:20:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:32.850 10:20:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:32.850 10:20:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:32.850 10:20:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:32.850 10:20:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:32.850 10:20:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:32.850 10:20:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:32.850 No valid GPT data, bailing 
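The reset sequence above walks every NVMe namespace, skips zoned devices, and asks spdk-gpt.py whether a usable partition table exists; the lines that follow fall back to blkid and then scrub the first MiB with dd. A condensed sketch of that pre-check-and-wipe pattern (the real script also checks for mounted filesystems before touching a device):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do                       # namespaces only, no partitions
        name=$(basename "$dev")
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue                   # leave zoned namespaces alone
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1        # no partition table -> wipe first MiB
        fi
    done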
00:06:32.850 10:20:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:32.850 10:20:05 -- scripts/common.sh@394 -- # pt= 00:06:32.850 10:20:05 -- scripts/common.sh@395 -- # return 1 00:06:32.850 10:20:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:32.850 1+0 records in 00:06:32.850 1+0 records out 00:06:32.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469669 s, 223 MB/s 00:06:32.850 10:20:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:32.850 10:20:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:32.850 10:20:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:32.850 10:20:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:32.850 10:20:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:32.850 No valid GPT data, bailing 00:06:32.850 10:20:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:32.850 10:20:05 -- scripts/common.sh@394 -- # pt= 00:06:32.850 10:20:05 -- scripts/common.sh@395 -- # return 1 00:06:32.850 10:20:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:32.850 1+0 records in 00:06:32.850 1+0 records out 00:06:32.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515822 s, 203 MB/s 00:06:32.850 10:20:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:32.850 10:20:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:32.850 10:20:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:32.850 10:20:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:32.850 10:20:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:32.850 No valid GPT data, bailing 00:06:32.850 10:20:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:32.850 10:20:05 -- scripts/common.sh@394 -- # pt= 00:06:32.850 10:20:05 -- scripts/common.sh@395 -- # return 1 00:06:32.850 10:20:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:32.850 1+0 records in 00:06:32.850 1+0 records out 00:06:32.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437713 s, 240 MB/s 00:06:32.850 10:20:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:32.850 10:20:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:32.850 10:20:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:32.850 10:20:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:32.850 10:20:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:32.850 No valid GPT data, bailing 00:06:32.850 10:20:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:32.850 10:20:06 -- scripts/common.sh@394 -- # pt= 00:06:32.850 10:20:06 -- scripts/common.sh@395 -- # return 1 00:06:32.850 10:20:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:32.850 1+0 records in 00:06:32.850 1+0 records out 00:06:32.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041987 s, 250 MB/s 00:06:32.850 10:20:06 -- spdk/autotest.sh@105 -- # sync 00:06:32.850 10:20:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:32.850 10:20:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:32.850 10:20:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:33.110 10:20:08 -- spdk/autotest.sh@111 -- # uname -s 00:06:33.110 10:20:08 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:06:33.110 10:20:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:33.110 10:20:08 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:33.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.678 Hugepages 00:06:33.678 node hugesize free / total 00:06:33.678 node0 1048576kB 0 / 0 00:06:33.678 node0 2048kB 0 / 0 00:06:33.678 00:06:33.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:33.938 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:33.938 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:33.938 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:33.938 10:20:09 -- spdk/autotest.sh@117 -- # uname -s 00:06:33.938 10:20:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:33.938 10:20:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:33.938 10:20:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:34.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.874 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.874 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.874 10:20:09 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:35.810 10:20:10 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:35.810 10:20:10 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:35.810 10:20:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:35.810 10:20:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:35.810 10:20:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:35.810 10:20:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:35.810 10:20:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:35.810 10:20:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.810 10:20:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:35.810 10:20:11 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:35.810 10:20:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:35.810 10:20:11 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:36.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.378 Waiting for block devices as requested 00:06:36.378 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.378 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.378 10:20:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:36.637 10:20:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:36.637 10:20:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1490 -- # printf 
'%s\n' nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:36.637 10:20:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:36.637 10:20:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:36.637 10:20:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1541 -- # continue 00:06:36.637 10:20:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:36.637 10:20:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:36.637 10:20:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:36.637 10:20:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:36.637 10:20:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:36.637 10:20:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:36.637 10:20:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:36.637 10:20:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:36.637 10:20:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:36.637 10:20:11 -- common/autotest_common.sh@1541 -- # continue 00:06:36.637 10:20:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:36.637 10:20:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.637 10:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:36.637 10:20:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:36.637 10:20:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.637 10:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:36.637 10:20:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:37.293 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:37.293 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:37.293 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:37.552 10:20:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:37.552 10:20:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.552 10:20:12 -- common/autotest_common.sh@10 -- # set +x 00:06:37.552 10:20:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:37.552 10:20:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:37.552 10:20:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:37.552 10:20:12 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:37.552 10:20:12 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:37.552 10:20:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:37.552 10:20:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:37.552 10:20:12 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:37.552 10:20:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:37.552 10:20:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:37.552 10:20:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:37.552 10:20:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:37.552 10:20:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:37.552 10:20:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:37.552 10:20:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:37.552 10:20:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:37.552 10:20:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:37.552 10:20:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:37.552 10:20:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:37.552 10:20:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:37.552 10:20:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:37.552 10:20:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:37.552 10:20:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:37.552 10:20:12 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:37.552 10:20:12 -- common/autotest_common.sh@1570 -- # return 0 00:06:37.552 10:20:12 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:37.552 10:20:12 -- common/autotest_common.sh@1578 -- # return 0 00:06:37.552 10:20:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:37.552 10:20:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:37.552 10:20:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:37.552 10:20:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:37.552 10:20:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:37.552 10:20:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.552 10:20:12 -- common/autotest_common.sh@10 -- # set +x 00:06:37.552 10:20:12 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:37.552 10:20:12 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:37.552 10:20:12 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:37.552 10:20:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:37.552 10:20:12 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.552 10:20:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.552 10:20:12 -- common/autotest_common.sh@10 -- # set +x 00:06:37.552 ************************************ 00:06:37.552 START TEST env 00:06:37.552 ************************************ 00:06:37.552 10:20:12 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:37.812 * Looking for test storage... 00:06:37.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.812 10:20:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.812 10:20:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.812 10:20:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.812 10:20:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.812 10:20:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.812 10:20:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.812 10:20:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.812 10:20:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.812 10:20:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.812 10:20:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.812 10:20:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.812 10:20:12 env -- scripts/common.sh@344 -- # case "$op" in 00:06:37.812 10:20:12 env -- scripts/common.sh@345 -- # : 1 00:06:37.812 10:20:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.812 10:20:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.812 10:20:12 env -- scripts/common.sh@365 -- # decimal 1 00:06:37.812 10:20:12 env -- scripts/common.sh@353 -- # local d=1 00:06:37.812 10:20:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.812 10:20:12 env -- scripts/common.sh@355 -- # echo 1 00:06:37.812 10:20:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.812 10:20:12 env -- scripts/common.sh@366 -- # decimal 2 00:06:37.812 10:20:12 env -- scripts/common.sh@353 -- # local d=2 00:06:37.812 10:20:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.812 10:20:12 env -- scripts/common.sh@355 -- # echo 2 00:06:37.812 10:20:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.812 10:20:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.812 10:20:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.812 10:20:12 env -- scripts/common.sh@368 -- # return 0 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.812 --rc genhtml_branch_coverage=1 00:06:37.812 --rc genhtml_function_coverage=1 00:06:37.812 --rc genhtml_legend=1 00:06:37.812 --rc geninfo_all_blocks=1 00:06:37.812 --rc geninfo_unexecuted_blocks=1 00:06:37.812 00:06:37.812 ' 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.812 --rc genhtml_branch_coverage=1 00:06:37.812 --rc genhtml_function_coverage=1 00:06:37.812 --rc genhtml_legend=1 00:06:37.812 --rc geninfo_all_blocks=1 00:06:37.812 --rc geninfo_unexecuted_blocks=1 00:06:37.812 00:06:37.812 ' 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.812 --rc genhtml_branch_coverage=1 00:06:37.812 --rc genhtml_function_coverage=1 00:06:37.812 --rc genhtml_legend=1 00:06:37.812 --rc geninfo_all_blocks=1 00:06:37.812 --rc geninfo_unexecuted_blocks=1 00:06:37.812 00:06:37.812 ' 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.812 --rc genhtml_branch_coverage=1 00:06:37.812 --rc genhtml_function_coverage=1 00:06:37.812 --rc genhtml_legend=1 00:06:37.812 --rc geninfo_all_blocks=1 00:06:37.812 --rc geninfo_unexecuted_blocks=1 00:06:37.812 00:06:37.812 ' 00:06:37.812 10:20:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.812 10:20:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.812 10:20:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.812 ************************************ 00:06:37.812 START TEST env_memory 00:06:37.812 ************************************ 00:06:37.812 10:20:12 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:37.812 00:06:37.812 00:06:37.812 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.812 http://cunit.sourceforge.net/ 00:06:37.812 00:06:37.812 00:06:37.812 Suite: memory 00:06:37.812 Test: alloc and free memory map ...[2024-12-10 10:20:12.945465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:37.812 passed 00:06:37.812 Test: mem map translation ...[2024-12-10 10:20:12.976342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:37.812 [2024-12-10 10:20:12.976380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:37.812 [2024-12-10 10:20:12.976444] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:37.812 [2024-12-10 10:20:12.976457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:37.812 passed 00:06:38.072 Test: mem map registration ...[2024-12-10 10:20:13.040073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:38.072 [2024-12-10 10:20:13.040107] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:38.072 passed 00:06:38.072 Test: mem map adjacent registrations ...passed 00:06:38.072 00:06:38.072 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.072 suites 1 1 n/a 0 0 00:06:38.072 tests 4 4 4 0 0 00:06:38.072 asserts 152 152 152 0 n/a 00:06:38.072 00:06:38.072 Elapsed time = 0.213 seconds 00:06:38.072 00:06:38.072 real 0m0.230s 00:06:38.072 user 0m0.217s 00:06:38.072 sys 0m0.010s 00:06:38.072 10:20:13 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.072 ************************************ 00:06:38.072 END TEST env_memory 00:06:38.072 ************************************ 00:06:38.072 10:20:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:38.072 10:20:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:38.072 10:20:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.072 10:20:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.072 10:20:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.072 ************************************ 00:06:38.072 START TEST env_vtophys 00:06:38.072 ************************************ 00:06:38.072 10:20:13 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:38.072 EAL: lib.eal log level changed from notice to debug 00:06:38.072 EAL: Detected lcore 0 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 1 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 2 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 3 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 4 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 5 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 6 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 7 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 8 as core 0 on socket 0 00:06:38.072 EAL: Detected lcore 9 as core 0 on socket 0 00:06:38.072 EAL: Maximum logical cores by configuration: 128 00:06:38.072 EAL: Detected CPU lcores: 10 00:06:38.072 EAL: Detected NUMA nodes: 1 00:06:38.072 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:38.072 EAL: Detected shared linkage of DPDK 00:06:38.072 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:38.072 EAL: Registered [vdev] bus. 00:06:38.072 EAL: bus.vdev log level changed from disabled to notice 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:38.072 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:38.072 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:38.072 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:38.072 EAL: No shared files mode enabled, IPC will be disabled 00:06:38.072 EAL: No shared files mode enabled, IPC is disabled 00:06:38.072 EAL: Selected IOVA mode 'PA' 00:06:38.072 EAL: Probing VFIO support... 00:06:38.072 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:38.072 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:38.072 EAL: Ask a virtual area of 0x2e000 bytes 00:06:38.072 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:38.072 EAL: Setting up physically contiguous memory... 00:06:38.072 EAL: Setting maximum number of open files to 524288 00:06:38.072 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:38.072 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:38.072 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.072 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:38.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.072 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.072 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:38.072 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:38.072 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.072 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:38.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.072 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.072 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:38.072 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:38.072 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.072 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:38.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.072 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.072 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:38.072 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:38.072 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.072 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:38.072 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.072 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.072 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:38.072 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:38.072 EAL: Hugepages will be freed exactly as allocated. 00:06:38.072 EAL: No shared files mode enabled, IPC is disabled 00:06:38.072 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: TSC frequency is ~2200000 KHz 00:06:38.332 EAL: Main lcore 0 is ready (tid=7f4a7ca67a00;cpuset=[0]) 00:06:38.332 EAL: Trying to obtain current memory policy. 00:06:38.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.332 EAL: Restoring previous memory policy: 0 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: Heap on socket 0 was expanded by 2MB 00:06:38.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:38.332 EAL: Mem event callback 'spdk:(nil)' registered 00:06:38.332 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:38.332 00:06:38.332 00:06:38.332 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.332 http://cunit.sourceforge.net/ 00:06:38.332 00:06:38.332 00:06:38.332 Suite: components_suite 00:06:38.332 Test: vtophys_malloc_test ...passed 00:06:38.332 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:38.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.332 EAL: Restoring previous memory policy: 4 00:06:38.332 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: Heap on socket 0 was expanded by 4MB 00:06:38.332 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: Heap on socket 0 was shrunk by 4MB 00:06:38.332 EAL: Trying to obtain current memory policy. 00:06:38.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.332 EAL: Restoring previous memory policy: 4 00:06:38.332 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: Heap on socket 0 was expanded by 6MB 00:06:38.332 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: Heap on socket 0 was shrunk by 6MB 00:06:38.332 EAL: Trying to obtain current memory policy. 00:06:38.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.332 EAL: Restoring previous memory policy: 4 00:06:38.332 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.332 EAL: Heap on socket 0 was expanded by 10MB 00:06:38.332 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.332 EAL: request: mp_malloc_sync 00:06:38.332 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was shrunk by 10MB 00:06:38.333 EAL: Trying to obtain current memory policy. 
00:06:38.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.333 EAL: Restoring previous memory policy: 4 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was expanded by 18MB 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was shrunk by 18MB 00:06:38.333 EAL: Trying to obtain current memory policy. 00:06:38.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.333 EAL: Restoring previous memory policy: 4 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was expanded by 34MB 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was shrunk by 34MB 00:06:38.333 EAL: Trying to obtain current memory policy. 00:06:38.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.333 EAL: Restoring previous memory policy: 4 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was expanded by 66MB 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was shrunk by 66MB 00:06:38.333 EAL: Trying to obtain current memory policy. 00:06:38.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.333 EAL: Restoring previous memory policy: 4 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was expanded by 130MB 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was shrunk by 130MB 00:06:38.333 EAL: Trying to obtain current memory policy. 00:06:38.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.333 EAL: Restoring previous memory policy: 4 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was expanded by 258MB 00:06:38.333 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.333 EAL: request: mp_malloc_sync 00:06:38.333 EAL: No shared files mode enabled, IPC is disabled 00:06:38.333 EAL: Heap on socket 0 was shrunk by 258MB 00:06:38.333 EAL: Trying to obtain current memory policy. 
00:06:38.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.592 EAL: Restoring previous memory policy: 4 00:06:38.592 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.592 EAL: request: mp_malloc_sync 00:06:38.592 EAL: No shared files mode enabled, IPC is disabled 00:06:38.592 EAL: Heap on socket 0 was expanded by 514MB 00:06:38.592 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.592 EAL: request: mp_malloc_sync 00:06:38.592 EAL: No shared files mode enabled, IPC is disabled 00:06:38.592 EAL: Heap on socket 0 was shrunk by 514MB 00:06:38.592 EAL: Trying to obtain current memory policy. 00:06:38.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.851 EAL: Restoring previous memory policy: 4 00:06:38.851 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.851 EAL: request: mp_malloc_sync 00:06:38.851 EAL: No shared files mode enabled, IPC is disabled 00:06:38.851 EAL: Heap on socket 0 was expanded by 1026MB 00:06:38.851 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.851 passed 00:06:38.851 00:06:38.851 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.851 suites 1 1 n/a 0 0 00:06:38.851 tests 2 2 2 0 0 00:06:38.851 asserts 5302 5302 5302 0 n/a 00:06:38.851 00:06:38.851 Elapsed time = 0.701 seconds 00:06:38.851 EAL: request: mp_malloc_sync 00:06:38.851 EAL: No shared files mode enabled, IPC is disabled 00:06:38.851 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:38.851 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.851 EAL: request: mp_malloc_sync 00:06:38.851 EAL: No shared files mode enabled, IPC is disabled 00:06:38.851 EAL: Heap on socket 0 was shrunk by 2MB 00:06:38.851 EAL: No shared files mode enabled, IPC is disabled 00:06:38.851 EAL: No shared files mode enabled, IPC is disabled 00:06:38.851 EAL: No shared files mode enabled, IPC is disabled 00:06:38.851 00:06:38.852 real 0m0.891s 00:06:38.852 user 0m0.465s 00:06:38.852 sys 0m0.297s 00:06:38.852 10:20:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.852 ************************************ 00:06:38.852 END TEST env_vtophys 00:06:38.852 ************************************ 00:06:38.852 10:20:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 10:20:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:39.111 10:20:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.111 10:20:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.111 10:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 ************************************ 00:06:39.111 START TEST env_pci 00:06:39.111 ************************************ 00:06:39.111 10:20:14 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:39.111 00:06:39.111 00:06:39.111 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.111 http://cunit.sourceforge.net/ 00:06:39.111 00:06:39.111 00:06:39.111 Suite: pci 00:06:39.111 Test: pci_hook ...[2024-12-10 10:20:14.132802] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69756 has claimed it 00:06:39.111 passed 00:06:39.111 00:06:39.111 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.111 suites 1 1 n/a 0 0 00:06:39.111 tests 1 1 1 0 0 00:06:39.111 asserts 25 25 25 0 n/a 00:06:39.111 00:06:39.111 Elapsed time = 0.002 seconds 00:06:39.111 EAL: Cannot find 
device (10000:00:01.0) 00:06:39.111 EAL: Failed to attach device on primary process 00:06:39.111 00:06:39.111 real 0m0.019s 00:06:39.111 user 0m0.008s 00:06:39.111 sys 0m0.011s 00:06:39.111 10:20:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.111 ************************************ 00:06:39.111 END TEST env_pci 00:06:39.111 ************************************ 00:06:39.111 10:20:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 10:20:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:39.111 10:20:14 env -- env/env.sh@15 -- # uname 00:06:39.111 10:20:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:39.111 10:20:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:39.111 10:20:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:39.111 10:20:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:39.111 10:20:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.111 10:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 ************************************ 00:06:39.111 START TEST env_dpdk_post_init 00:06:39.111 ************************************ 00:06:39.111 10:20:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:39.111 EAL: Detected CPU lcores: 10 00:06:39.111 EAL: Detected NUMA nodes: 1 00:06:39.111 EAL: Detected shared linkage of DPDK 00:06:39.111 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.111 EAL: Selected IOVA mode 'PA' 00:06:39.370 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:39.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:39.371 Starting DPDK initialization... 00:06:39.371 Starting SPDK post initialization... 00:06:39.371 SPDK NVMe probe 00:06:39.371 Attaching to 0000:00:10.0 00:06:39.371 Attaching to 0000:00:11.0 00:06:39.371 Attached to 0000:00:10.0 00:06:39.371 Attached to 0000:00:11.0 00:06:39.371 Cleaning up... 
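
The env_dpdk_post_init run above probes the two emulated NVMe controllers (0000:00:10.0 and 0000:00:11.0) using the core mask and base virtual address that the wrapper passes on the command line. A minimal sketch of reproducing that invocation by hand, assuming hugepages are already configured via scripts/setup.sh (the HUGEMEM value below is an assumption, not taken from this log):

# hypothetical manual reproduction of the env_dpdk_post_init step traced above
HUGEMEM=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # run as root; reserves hugepages and binds NVMe devices
/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
    -c 0x1 --base-virtaddr=0x200000000000
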
00:06:39.371 00:06:39.371 real 0m0.179s 00:06:39.371 user 0m0.050s 00:06:39.371 sys 0m0.029s 00:06:39.371 10:20:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.371 10:20:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.371 ************************************ 00:06:39.371 END TEST env_dpdk_post_init 00:06:39.371 ************************************ 00:06:39.371 10:20:14 env -- env/env.sh@26 -- # uname 00:06:39.371 10:20:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:39.371 10:20:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.371 10:20:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.371 10:20:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.371 10:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.371 ************************************ 00:06:39.371 START TEST env_mem_callbacks 00:06:39.371 ************************************ 00:06:39.371 10:20:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.371 EAL: Detected CPU lcores: 10 00:06:39.371 EAL: Detected NUMA nodes: 1 00:06:39.371 EAL: Detected shared linkage of DPDK 00:06:39.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.371 EAL: Selected IOVA mode 'PA' 00:06:39.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.371 00:06:39.371 00:06:39.371 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.371 http://cunit.sourceforge.net/ 00:06:39.371 00:06:39.371 00:06:39.371 Suite: memory 00:06:39.371 Test: test ... 00:06:39.371 register 0x200000200000 2097152 00:06:39.371 malloc 3145728 00:06:39.371 register 0x200000400000 4194304 00:06:39.371 buf 0x200000500000 len 3145728 PASSED 00:06:39.371 malloc 64 00:06:39.371 buf 0x2000004fff40 len 64 PASSED 00:06:39.371 malloc 4194304 00:06:39.371 register 0x200000800000 6291456 00:06:39.371 buf 0x200000a00000 len 4194304 PASSED 00:06:39.371 free 0x200000500000 3145728 00:06:39.371 free 0x2000004fff40 64 00:06:39.371 unregister 0x200000400000 4194304 PASSED 00:06:39.371 free 0x200000a00000 4194304 00:06:39.371 unregister 0x200000800000 6291456 PASSED 00:06:39.371 malloc 8388608 00:06:39.371 register 0x200000400000 10485760 00:06:39.371 buf 0x200000600000 len 8388608 PASSED 00:06:39.371 free 0x200000600000 8388608 00:06:39.371 unregister 0x200000400000 10485760 PASSED 00:06:39.371 passed 00:06:39.371 00:06:39.371 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.371 suites 1 1 n/a 0 0 00:06:39.371 tests 1 1 1 0 0 00:06:39.371 asserts 15 15 15 0 n/a 00:06:39.371 00:06:39.371 Elapsed time = 0.008 seconds 00:06:39.371 00:06:39.371 real 0m0.140s 00:06:39.371 user 0m0.018s 00:06:39.371 sys 0m0.021s 00:06:39.371 10:20:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.371 ************************************ 00:06:39.371 END TEST env_mem_callbacks 00:06:39.371 ************************************ 00:06:39.371 10:20:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:39.630 00:06:39.630 real 0m1.917s 00:06:39.630 user 0m0.961s 00:06:39.630 sys 0m0.611s 00:06:39.630 10:20:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.630 ************************************ 00:06:39.630 END TEST env 00:06:39.630 ************************************ 00:06:39.630 10:20:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.630 10:20:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:39.630 10:20:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.630 10:20:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.630 10:20:14 -- common/autotest_common.sh@10 -- # set +x 00:06:39.630 ************************************ 00:06:39.630 START TEST rpc 00:06:39.630 ************************************ 00:06:39.630 10:20:14 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:39.630 * Looking for test storage... 00:06:39.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:39.630 10:20:14 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.630 10:20:14 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.630 10:20:14 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.630 10:20:14 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.630 10:20:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.630 10:20:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.630 10:20:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.630 10:20:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.630 10:20:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.630 10:20:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.630 10:20:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.630 10:20:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.630 10:20:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.630 10:20:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:39.630 10:20:14 rpc -- scripts/common.sh@345 -- # : 1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.630 10:20:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.630 10:20:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@353 -- # local d=1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.630 10:20:14 rpc -- scripts/common.sh@355 -- # echo 1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.630 10:20:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:39.630 10:20:14 rpc -- scripts/common.sh@353 -- # local d=2 00:06:39.630 10:20:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.630 10:20:14 rpc -- scripts/common.sh@355 -- # echo 2 00:06:39.890 10:20:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.890 10:20:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.890 10:20:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.890 10:20:14 rpc -- scripts/common.sh@368 -- # return 0 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.890 --rc genhtml_branch_coverage=1 00:06:39.890 --rc genhtml_function_coverage=1 00:06:39.890 --rc genhtml_legend=1 00:06:39.890 --rc geninfo_all_blocks=1 00:06:39.890 --rc geninfo_unexecuted_blocks=1 00:06:39.890 00:06:39.890 ' 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.890 --rc genhtml_branch_coverage=1 00:06:39.890 --rc genhtml_function_coverage=1 00:06:39.890 --rc genhtml_legend=1 00:06:39.890 --rc geninfo_all_blocks=1 00:06:39.890 --rc geninfo_unexecuted_blocks=1 00:06:39.890 00:06:39.890 ' 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.890 --rc genhtml_branch_coverage=1 00:06:39.890 --rc genhtml_function_coverage=1 00:06:39.890 --rc genhtml_legend=1 00:06:39.890 --rc geninfo_all_blocks=1 00:06:39.890 --rc geninfo_unexecuted_blocks=1 00:06:39.890 00:06:39.890 ' 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.890 --rc genhtml_branch_coverage=1 00:06:39.890 --rc genhtml_function_coverage=1 00:06:39.890 --rc genhtml_legend=1 00:06:39.890 --rc geninfo_all_blocks=1 00:06:39.890 --rc geninfo_unexecuted_blocks=1 00:06:39.890 00:06:39.890 ' 00:06:39.890 10:20:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69873 00:06:39.890 10:20:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.890 10:20:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:39.890 10:20:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69873 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@831 -- # '[' -z 69873 ']' 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
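
The rpc suite starting here launches spdk_tgt with the bdev subsystem enabled and blocks until the JSON-RPC socket answers. A condensed sketch of that start-and-wait pattern, simplified from the waitforlisten trace above (the polling loop is an assumption; the real helper also tracks the PID and retry count):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!
# poll the default UNIX-domain socket until the target responds, as waitforlisten does
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done
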
00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.890 10:20:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.890 [2024-12-10 10:20:14.934178] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.890 [2024-12-10 10:20:14.934290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69873 ] 00:06:39.890 [2024-12-10 10:20:15.073412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.890 [2024-12-10 10:20:15.107328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:39.890 [2024-12-10 10:20:15.107421] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69873' to capture a snapshot of events at runtime. 00:06:39.890 [2024-12-10 10:20:15.107432] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.890 [2024-12-10 10:20:15.107439] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.890 [2024-12-10 10:20:15.107445] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69873 for offline analysis/debug. 00:06:39.890 [2024-12-10 10:20:15.107476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.149 [2024-12-10 10:20:15.144340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.149 10:20:15 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.149 10:20:15 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.149 10:20:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:40.149 10:20:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:40.149 10:20:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:40.149 10:20:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:40.149 10:20:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.149 10:20:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.149 10:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.149 ************************************ 00:06:40.149 START TEST rpc_integrity 00:06:40.149 ************************************ 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:40.149 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.149 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:40.408 { 00:06:40.408 "name": "Malloc0", 00:06:40.408 "aliases": [ 00:06:40.408 "aaf9f440-5250-4335-975e-3ff5908faa8f" 00:06:40.408 ], 00:06:40.408 "product_name": "Malloc disk", 00:06:40.408 "block_size": 512, 00:06:40.408 "num_blocks": 16384, 00:06:40.408 "uuid": "aaf9f440-5250-4335-975e-3ff5908faa8f", 00:06:40.408 "assigned_rate_limits": { 00:06:40.408 "rw_ios_per_sec": 0, 00:06:40.408 "rw_mbytes_per_sec": 0, 00:06:40.408 "r_mbytes_per_sec": 0, 00:06:40.408 "w_mbytes_per_sec": 0 00:06:40.408 }, 00:06:40.408 "claimed": false, 00:06:40.408 "zoned": false, 00:06:40.408 "supported_io_types": { 00:06:40.408 "read": true, 00:06:40.408 "write": true, 00:06:40.408 "unmap": true, 00:06:40.408 "flush": true, 00:06:40.408 "reset": true, 00:06:40.408 "nvme_admin": false, 00:06:40.408 "nvme_io": false, 00:06:40.408 "nvme_io_md": false, 00:06:40.408 "write_zeroes": true, 00:06:40.408 "zcopy": true, 00:06:40.408 "get_zone_info": false, 00:06:40.408 "zone_management": false, 00:06:40.408 "zone_append": false, 00:06:40.408 "compare": false, 00:06:40.408 "compare_and_write": false, 00:06:40.408 "abort": true, 00:06:40.408 "seek_hole": false, 00:06:40.408 "seek_data": false, 00:06:40.408 "copy": true, 00:06:40.408 "nvme_iov_md": false 00:06:40.408 }, 00:06:40.408 "memory_domains": [ 00:06:40.408 { 00:06:40.408 "dma_device_id": "system", 00:06:40.408 "dma_device_type": 1 00:06:40.408 }, 00:06:40.408 { 00:06:40.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.408 "dma_device_type": 2 00:06:40.408 } 00:06:40.408 ], 00:06:40.408 "driver_specific": {} 00:06:40.408 } 00:06:40.408 ]' 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.408 [2024-12-10 10:20:15.435598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:40.408 [2024-12-10 10:20:15.435675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.408 [2024-12-10 10:20:15.435692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1205030 00:06:40.408 [2024-12-10 10:20:15.435702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.408 [2024-12-10 10:20:15.437335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.408 [2024-12-10 10:20:15.437390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:06:40.408 Passthru0 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:40.408 { 00:06:40.408 "name": "Malloc0", 00:06:40.408 "aliases": [ 00:06:40.408 "aaf9f440-5250-4335-975e-3ff5908faa8f" 00:06:40.408 ], 00:06:40.408 "product_name": "Malloc disk", 00:06:40.408 "block_size": 512, 00:06:40.408 "num_blocks": 16384, 00:06:40.408 "uuid": "aaf9f440-5250-4335-975e-3ff5908faa8f", 00:06:40.408 "assigned_rate_limits": { 00:06:40.408 "rw_ios_per_sec": 0, 00:06:40.408 "rw_mbytes_per_sec": 0, 00:06:40.408 "r_mbytes_per_sec": 0, 00:06:40.408 "w_mbytes_per_sec": 0 00:06:40.408 }, 00:06:40.408 "claimed": true, 00:06:40.408 "claim_type": "exclusive_write", 00:06:40.408 "zoned": false, 00:06:40.408 "supported_io_types": { 00:06:40.408 "read": true, 00:06:40.408 "write": true, 00:06:40.408 "unmap": true, 00:06:40.408 "flush": true, 00:06:40.408 "reset": true, 00:06:40.408 "nvme_admin": false, 00:06:40.408 "nvme_io": false, 00:06:40.408 "nvme_io_md": false, 00:06:40.408 "write_zeroes": true, 00:06:40.408 "zcopy": true, 00:06:40.408 "get_zone_info": false, 00:06:40.408 "zone_management": false, 00:06:40.408 "zone_append": false, 00:06:40.408 "compare": false, 00:06:40.408 "compare_and_write": false, 00:06:40.408 "abort": true, 00:06:40.408 "seek_hole": false, 00:06:40.408 "seek_data": false, 00:06:40.408 "copy": true, 00:06:40.408 "nvme_iov_md": false 00:06:40.408 }, 00:06:40.408 "memory_domains": [ 00:06:40.408 { 00:06:40.408 "dma_device_id": "system", 00:06:40.408 "dma_device_type": 1 00:06:40.408 }, 00:06:40.408 { 00:06:40.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.408 "dma_device_type": 2 00:06:40.408 } 00:06:40.408 ], 00:06:40.408 "driver_specific": {} 00:06:40.408 }, 00:06:40.408 { 00:06:40.408 "name": "Passthru0", 00:06:40.408 "aliases": [ 00:06:40.408 "ccdfa7c2-abd4-5f2e-852d-f23725cb43c6" 00:06:40.408 ], 00:06:40.408 "product_name": "passthru", 00:06:40.408 "block_size": 512, 00:06:40.408 "num_blocks": 16384, 00:06:40.408 "uuid": "ccdfa7c2-abd4-5f2e-852d-f23725cb43c6", 00:06:40.408 "assigned_rate_limits": { 00:06:40.408 "rw_ios_per_sec": 0, 00:06:40.408 "rw_mbytes_per_sec": 0, 00:06:40.408 "r_mbytes_per_sec": 0, 00:06:40.408 "w_mbytes_per_sec": 0 00:06:40.408 }, 00:06:40.408 "claimed": false, 00:06:40.408 "zoned": false, 00:06:40.408 "supported_io_types": { 00:06:40.408 "read": true, 00:06:40.408 "write": true, 00:06:40.408 "unmap": true, 00:06:40.408 "flush": true, 00:06:40.408 "reset": true, 00:06:40.408 "nvme_admin": false, 00:06:40.408 "nvme_io": false, 00:06:40.408 "nvme_io_md": false, 00:06:40.408 "write_zeroes": true, 00:06:40.408 "zcopy": true, 00:06:40.408 "get_zone_info": false, 00:06:40.408 "zone_management": false, 00:06:40.408 "zone_append": false, 00:06:40.408 "compare": false, 00:06:40.408 "compare_and_write": false, 00:06:40.408 "abort": true, 00:06:40.408 "seek_hole": false, 00:06:40.408 "seek_data": false, 00:06:40.408 "copy": true, 00:06:40.408 "nvme_iov_md": false 00:06:40.408 }, 00:06:40.408 "memory_domains": [ 00:06:40.408 { 00:06:40.408 "dma_device_id": "system", 00:06:40.408 
"dma_device_type": 1 00:06:40.408 }, 00:06:40.408 { 00:06:40.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.408 "dma_device_type": 2 00:06:40.408 } 00:06:40.408 ], 00:06:40.408 "driver_specific": { 00:06:40.408 "passthru": { 00:06:40.408 "name": "Passthru0", 00:06:40.408 "base_bdev_name": "Malloc0" 00:06:40.408 } 00:06:40.408 } 00:06:40.408 } 00:06:40.408 ]' 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:40.408 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.408 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.409 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.409 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.409 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:40.409 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:40.409 10:20:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:40.409 00:06:40.409 real 0m0.319s 00:06:40.409 user 0m0.213s 00:06:40.409 sys 0m0.040s 00:06:40.409 ************************************ 00:06:40.409 END TEST rpc_integrity 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.409 10:20:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.409 ************************************ 00:06:40.667 10:20:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:40.667 10:20:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.668 10:20:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.668 10:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 ************************************ 00:06:40.668 START TEST rpc_plugins 00:06:40.668 ************************************ 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:40.668 { 00:06:40.668 "name": "Malloc1", 00:06:40.668 "aliases": [ 00:06:40.668 "3d4ba37c-dd7d-45ea-85ea-185c01fa7928" 00:06:40.668 ], 00:06:40.668 "product_name": "Malloc disk", 00:06:40.668 "block_size": 4096, 00:06:40.668 "num_blocks": 256, 00:06:40.668 "uuid": "3d4ba37c-dd7d-45ea-85ea-185c01fa7928", 00:06:40.668 "assigned_rate_limits": { 00:06:40.668 "rw_ios_per_sec": 0, 00:06:40.668 "rw_mbytes_per_sec": 0, 00:06:40.668 "r_mbytes_per_sec": 0, 00:06:40.668 "w_mbytes_per_sec": 0 00:06:40.668 }, 00:06:40.668 "claimed": false, 00:06:40.668 "zoned": false, 00:06:40.668 "supported_io_types": { 00:06:40.668 "read": true, 00:06:40.668 "write": true, 00:06:40.668 "unmap": true, 00:06:40.668 "flush": true, 00:06:40.668 "reset": true, 00:06:40.668 "nvme_admin": false, 00:06:40.668 "nvme_io": false, 00:06:40.668 "nvme_io_md": false, 00:06:40.668 "write_zeroes": true, 00:06:40.668 "zcopy": true, 00:06:40.668 "get_zone_info": false, 00:06:40.668 "zone_management": false, 00:06:40.668 "zone_append": false, 00:06:40.668 "compare": false, 00:06:40.668 "compare_and_write": false, 00:06:40.668 "abort": true, 00:06:40.668 "seek_hole": false, 00:06:40.668 "seek_data": false, 00:06:40.668 "copy": true, 00:06:40.668 "nvme_iov_md": false 00:06:40.668 }, 00:06:40.668 "memory_domains": [ 00:06:40.668 { 00:06:40.668 "dma_device_id": "system", 00:06:40.668 "dma_device_type": 1 00:06:40.668 }, 00:06:40.668 { 00:06:40.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.668 "dma_device_type": 2 00:06:40.668 } 00:06:40.668 ], 00:06:40.668 "driver_specific": {} 00:06:40.668 } 00:06:40.668 ]' 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:40.668 10:20:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:40.668 00:06:40.668 real 0m0.163s 00:06:40.668 user 0m0.107s 00:06:40.668 sys 0m0.023s 00:06:40.668 ************************************ 00:06:40.668 END TEST rpc_plugins 00:06:40.668 ************************************ 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.668 10:20:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 10:20:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:40.668 10:20:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.668 10:20:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.668 10:20:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.668 ************************************ 00:06:40.668 START TEST 
rpc_trace_cmd_test 00:06:40.668 ************************************ 00:06:40.668 10:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:40.668 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:40.668 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:40.668 10:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.668 10:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.927 10:20:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.927 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:40.927 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69873", 00:06:40.927 "tpoint_group_mask": "0x8", 00:06:40.927 "iscsi_conn": { 00:06:40.927 "mask": "0x2", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "scsi": { 00:06:40.927 "mask": "0x4", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "bdev": { 00:06:40.927 "mask": "0x8", 00:06:40.927 "tpoint_mask": "0xffffffffffffffff" 00:06:40.927 }, 00:06:40.927 "nvmf_rdma": { 00:06:40.927 "mask": "0x10", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "nvmf_tcp": { 00:06:40.927 "mask": "0x20", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "ftl": { 00:06:40.927 "mask": "0x40", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "blobfs": { 00:06:40.927 "mask": "0x80", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "dsa": { 00:06:40.927 "mask": "0x200", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "thread": { 00:06:40.927 "mask": "0x400", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "nvme_pcie": { 00:06:40.927 "mask": "0x800", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "iaa": { 00:06:40.927 "mask": "0x1000", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "nvme_tcp": { 00:06:40.927 "mask": "0x2000", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "bdev_nvme": { 00:06:40.927 "mask": "0x4000", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "sock": { 00:06:40.927 "mask": "0x8000", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "blob": { 00:06:40.927 "mask": "0x10000", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 }, 00:06:40.927 "bdev_raid": { 00:06:40.927 "mask": "0x20000", 00:06:40.927 "tpoint_mask": "0x0" 00:06:40.927 } 00:06:40.927 }' 00:06:40.927 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:40.927 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:40.927 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:40.927 10:20:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:40.927 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:40.927 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:40.927 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:40.927 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:40.927 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:41.186 10:20:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:41.186 00:06:41.186 real 0m0.290s 00:06:41.186 user 0m0.253s 00:06:41.186 sys 0m0.024s 00:06:41.186 ************************************ 
00:06:41.186 END TEST rpc_trace_cmd_test 00:06:41.186 10:20:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.186 10:20:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.186 ************************************ 00:06:41.186 10:20:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:41.186 10:20:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:41.186 10:20:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:41.186 10:20:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.186 10:20:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.186 10:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.186 ************************************ 00:06:41.186 START TEST rpc_daemon_integrity 00:06:41.186 ************************************ 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:41.186 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:41.187 { 00:06:41.187 "name": "Malloc2", 00:06:41.187 "aliases": [ 00:06:41.187 "cfaa6067-6ea1-4f49-8038-db1d5a5da327" 00:06:41.187 ], 00:06:41.187 "product_name": "Malloc disk", 00:06:41.187 "block_size": 512, 00:06:41.187 "num_blocks": 16384, 00:06:41.187 "uuid": "cfaa6067-6ea1-4f49-8038-db1d5a5da327", 00:06:41.187 "assigned_rate_limits": { 00:06:41.187 "rw_ios_per_sec": 0, 00:06:41.187 "rw_mbytes_per_sec": 0, 00:06:41.187 "r_mbytes_per_sec": 0, 00:06:41.187 "w_mbytes_per_sec": 0 00:06:41.187 }, 00:06:41.187 "claimed": false, 00:06:41.187 "zoned": false, 00:06:41.187 "supported_io_types": { 00:06:41.187 "read": true, 00:06:41.187 "write": true, 00:06:41.187 "unmap": true, 00:06:41.187 "flush": true, 00:06:41.187 "reset": true, 00:06:41.187 "nvme_admin": false, 00:06:41.187 "nvme_io": false, 00:06:41.187 "nvme_io_md": false, 00:06:41.187 "write_zeroes": true, 00:06:41.187 "zcopy": true, 00:06:41.187 "get_zone_info": false, 00:06:41.187 "zone_management": false, 00:06:41.187 "zone_append": false, 
00:06:41.187 "compare": false, 00:06:41.187 "compare_and_write": false, 00:06:41.187 "abort": true, 00:06:41.187 "seek_hole": false, 00:06:41.187 "seek_data": false, 00:06:41.187 "copy": true, 00:06:41.187 "nvme_iov_md": false 00:06:41.187 }, 00:06:41.187 "memory_domains": [ 00:06:41.187 { 00:06:41.187 "dma_device_id": "system", 00:06:41.187 "dma_device_type": 1 00:06:41.187 }, 00:06:41.187 { 00:06:41.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.187 "dma_device_type": 2 00:06:41.187 } 00:06:41.187 ], 00:06:41.187 "driver_specific": {} 00:06:41.187 } 00:06:41.187 ]' 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.187 [2024-12-10 10:20:16.367980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:41.187 [2024-12-10 10:20:16.368031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.187 [2024-12-10 10:20:16.368049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1207ce0 00:06:41.187 [2024-12-10 10:20:16.368060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.187 [2024-12-10 10:20:16.369602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.187 [2024-12-10 10:20:16.369653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:41.187 Passthru0 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.187 { 00:06:41.187 "name": "Malloc2", 00:06:41.187 "aliases": [ 00:06:41.187 "cfaa6067-6ea1-4f49-8038-db1d5a5da327" 00:06:41.187 ], 00:06:41.187 "product_name": "Malloc disk", 00:06:41.187 "block_size": 512, 00:06:41.187 "num_blocks": 16384, 00:06:41.187 "uuid": "cfaa6067-6ea1-4f49-8038-db1d5a5da327", 00:06:41.187 "assigned_rate_limits": { 00:06:41.187 "rw_ios_per_sec": 0, 00:06:41.187 "rw_mbytes_per_sec": 0, 00:06:41.187 "r_mbytes_per_sec": 0, 00:06:41.187 "w_mbytes_per_sec": 0 00:06:41.187 }, 00:06:41.187 "claimed": true, 00:06:41.187 "claim_type": "exclusive_write", 00:06:41.187 "zoned": false, 00:06:41.187 "supported_io_types": { 00:06:41.187 "read": true, 00:06:41.187 "write": true, 00:06:41.187 "unmap": true, 00:06:41.187 "flush": true, 00:06:41.187 "reset": true, 00:06:41.187 "nvme_admin": false, 00:06:41.187 "nvme_io": false, 00:06:41.187 "nvme_io_md": false, 00:06:41.187 "write_zeroes": true, 00:06:41.187 "zcopy": true, 00:06:41.187 "get_zone_info": false, 00:06:41.187 "zone_management": false, 00:06:41.187 "zone_append": false, 00:06:41.187 "compare": false, 00:06:41.187 "compare_and_write": false, 00:06:41.187 "abort": true, 00:06:41.187 "seek_hole": 
false, 00:06:41.187 "seek_data": false, 00:06:41.187 "copy": true, 00:06:41.187 "nvme_iov_md": false 00:06:41.187 }, 00:06:41.187 "memory_domains": [ 00:06:41.187 { 00:06:41.187 "dma_device_id": "system", 00:06:41.187 "dma_device_type": 1 00:06:41.187 }, 00:06:41.187 { 00:06:41.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.187 "dma_device_type": 2 00:06:41.187 } 00:06:41.187 ], 00:06:41.187 "driver_specific": {} 00:06:41.187 }, 00:06:41.187 { 00:06:41.187 "name": "Passthru0", 00:06:41.187 "aliases": [ 00:06:41.187 "bcc6af9c-1928-510c-b408-1ef73c15defd" 00:06:41.187 ], 00:06:41.187 "product_name": "passthru", 00:06:41.187 "block_size": 512, 00:06:41.187 "num_blocks": 16384, 00:06:41.187 "uuid": "bcc6af9c-1928-510c-b408-1ef73c15defd", 00:06:41.187 "assigned_rate_limits": { 00:06:41.187 "rw_ios_per_sec": 0, 00:06:41.187 "rw_mbytes_per_sec": 0, 00:06:41.187 "r_mbytes_per_sec": 0, 00:06:41.187 "w_mbytes_per_sec": 0 00:06:41.187 }, 00:06:41.187 "claimed": false, 00:06:41.187 "zoned": false, 00:06:41.187 "supported_io_types": { 00:06:41.187 "read": true, 00:06:41.187 "write": true, 00:06:41.187 "unmap": true, 00:06:41.187 "flush": true, 00:06:41.187 "reset": true, 00:06:41.187 "nvme_admin": false, 00:06:41.187 "nvme_io": false, 00:06:41.187 "nvme_io_md": false, 00:06:41.187 "write_zeroes": true, 00:06:41.187 "zcopy": true, 00:06:41.187 "get_zone_info": false, 00:06:41.187 "zone_management": false, 00:06:41.187 "zone_append": false, 00:06:41.187 "compare": false, 00:06:41.187 "compare_and_write": false, 00:06:41.187 "abort": true, 00:06:41.187 "seek_hole": false, 00:06:41.187 "seek_data": false, 00:06:41.187 "copy": true, 00:06:41.187 "nvme_iov_md": false 00:06:41.187 }, 00:06:41.187 "memory_domains": [ 00:06:41.187 { 00:06:41.187 "dma_device_id": "system", 00:06:41.187 "dma_device_type": 1 00:06:41.187 }, 00:06:41.187 { 00:06:41.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.187 "dma_device_type": 2 00:06:41.187 } 00:06:41.187 ], 00:06:41.187 "driver_specific": { 00:06:41.187 "passthru": { 00:06:41.187 "name": "Passthru0", 00:06:41.187 "base_bdev_name": "Malloc2" 00:06:41.187 } 00:06:41.187 } 00:06:41.187 } 00:06:41.187 ]' 00:06:41.187 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.447 00:06:41.447 real 0m0.322s 00:06:41.447 user 0m0.222s 00:06:41.447 sys 0m0.038s 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.447 ************************************ 00:06:41.447 END TEST rpc_daemon_integrity 00:06:41.447 ************************************ 00:06:41.447 10:20:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.447 10:20:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:41.447 10:20:16 rpc -- rpc/rpc.sh@84 -- # killprocess 69873 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 69873 ']' 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@954 -- # kill -0 69873 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@955 -- # uname 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69873 00:06:41.447 killing process with pid 69873 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69873' 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@969 -- # kill 69873 00:06:41.447 10:20:16 rpc -- common/autotest_common.sh@974 -- # wait 69873 00:06:41.706 00:06:41.706 real 0m2.191s 00:06:41.706 user 0m2.960s 00:06:41.706 sys 0m0.585s 00:06:41.706 ************************************ 00:06:41.706 END TEST rpc 00:06:41.706 ************************************ 00:06:41.706 10:20:16 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.706 10:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.706 10:20:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:41.706 10:20:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.706 10:20:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.706 10:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:41.706 ************************************ 00:06:41.706 START TEST skip_rpc 00:06:41.706 ************************************ 00:06:41.706 10:20:16 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:41.966 * Looking for test storage... 
00:06:41.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:41.966 10:20:16 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.966 10:20:16 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.966 10:20:16 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.966 10:20:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.966 --rc genhtml_branch_coverage=1 00:06:41.966 --rc genhtml_function_coverage=1 00:06:41.966 --rc genhtml_legend=1 00:06:41.966 --rc geninfo_all_blocks=1 00:06:41.966 --rc geninfo_unexecuted_blocks=1 00:06:41.966 00:06:41.966 ' 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.966 --rc genhtml_branch_coverage=1 00:06:41.966 --rc genhtml_function_coverage=1 00:06:41.966 --rc genhtml_legend=1 00:06:41.966 --rc geninfo_all_blocks=1 00:06:41.966 --rc geninfo_unexecuted_blocks=1 00:06:41.966 00:06:41.966 ' 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.966 --rc genhtml_branch_coverage=1 00:06:41.966 --rc genhtml_function_coverage=1 00:06:41.966 --rc genhtml_legend=1 00:06:41.966 --rc geninfo_all_blocks=1 00:06:41.966 --rc geninfo_unexecuted_blocks=1 00:06:41.966 00:06:41.966 ' 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.966 --rc genhtml_branch_coverage=1 00:06:41.966 --rc genhtml_function_coverage=1 00:06:41.966 --rc genhtml_legend=1 00:06:41.966 --rc geninfo_all_blocks=1 00:06:41.966 --rc geninfo_unexecuted_blocks=1 00:06:41.966 00:06:41.966 ' 00:06:41.966 10:20:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:41.966 10:20:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:41.966 10:20:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.966 10:20:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.966 ************************************ 00:06:41.966 START TEST skip_rpc 00:06:41.966 ************************************ 00:06:41.966 10:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:41.966 10:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70072 00:06:41.966 10:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.966 10:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:41.966 10:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:42.226 [2024-12-10 10:20:17.199505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
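What follows is the heart of test_skip_rpc: with the target launched via --no-rpc-server, any RPC call has to fail. A minimal hedged sketch of that flow, with the autotest_common.sh helpers (rpc_cmd, killprocess, NOT) left out:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but no RPC server is started
    spdk_pid=$!
    sleep 5
    if scripts/rpc.py spdk_get_version; then      # expected to fail: no socket to talk to
        echo "RPC unexpectedly answered"; kill "$spdk_pid"; exit 1
    fi
    kill "$spdk_pid"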
00:06:42.226 [2024-12-10 10:20:17.199803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70072 ] 00:06:42.226 [2024-12-10 10:20:17.340032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.226 [2024-12-10 10:20:17.373476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.226 [2024-12-10 10:20:17.409228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.499 10:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70072 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70072 ']' 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70072 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70072 00:06:47.500 killing process with pid 70072 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70072' 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70072 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70072 00:06:47.500 00:06:47.500 real 0m5.299s 00:06:47.500 user 0m5.017s 00:06:47.500 sys 0m0.196s 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.500 ************************************ 00:06:47.500 END TEST 
skip_rpc 00:06:47.500 ************************************ 00:06:47.500 10:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.500 10:20:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:47.500 10:20:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.500 10:20:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.500 10:20:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.500 ************************************ 00:06:47.500 START TEST skip_rpc_with_json 00:06:47.500 ************************************ 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70158 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70158 00:06:47.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70158 ']' 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.500 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.500 [2024-12-10 10:20:22.551763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
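waitforlisten (note max_retries=100 in the trace above) simply polls the UNIX-domain RPC socket until the target answers. A simplified stand-in, assuming the default /var/tmp/spdk.sock; the real helper also verifies that the target pid is still alive between retries:

    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done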
00:06:47.500 [2024-12-10 10:20:22.551877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70158 ] 00:06:47.500 [2024-12-10 10:20:22.691134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.760 [2024-12-10 10:20:22.727073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.760 [2024-12-10 10:20:22.763512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.760 [2024-12-10 10:20:22.886089] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:47.760 request: 00:06:47.760 { 00:06:47.760 "trtype": "tcp", 00:06:47.760 "method": "nvmf_get_transports", 00:06:47.760 "req_id": 1 00:06:47.760 } 00:06:47.760 Got JSON-RPC error response 00:06:47.760 response: 00:06:47.760 { 00:06:47.760 "code": -19, 00:06:47.760 "message": "No such device" 00:06:47.760 } 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.760 [2024-12-10 10:20:22.898213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.760 10:20:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:48.020 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.020 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.020 { 00:06:48.020 "subsystems": [ 00:06:48.020 { 00:06:48.020 "subsystem": "fsdev", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "fsdev_set_opts", 00:06:48.020 "params": { 00:06:48.020 "fsdev_io_pool_size": 65535, 00:06:48.020 "fsdev_io_cache_size": 256 00:06:48.020 } 00:06:48.020 } 00:06:48.020 ] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "vfio_user_target", 00:06:48.020 "config": null 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "keyring", 00:06:48.020 "config": [] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "iobuf", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "iobuf_set_options", 00:06:48.020 "params": { 00:06:48.020 "small_pool_count": 8192, 00:06:48.020 "large_pool_count": 1024, 00:06:48.020 
"small_bufsize": 8192, 00:06:48.020 "large_bufsize": 135168 00:06:48.020 } 00:06:48.020 } 00:06:48.020 ] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "sock", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "sock_set_default_impl", 00:06:48.020 "params": { 00:06:48.020 "impl_name": "uring" 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "sock_impl_set_options", 00:06:48.020 "params": { 00:06:48.020 "impl_name": "ssl", 00:06:48.020 "recv_buf_size": 4096, 00:06:48.020 "send_buf_size": 4096, 00:06:48.020 "enable_recv_pipe": true, 00:06:48.020 "enable_quickack": false, 00:06:48.020 "enable_placement_id": 0, 00:06:48.020 "enable_zerocopy_send_server": true, 00:06:48.020 "enable_zerocopy_send_client": false, 00:06:48.020 "zerocopy_threshold": 0, 00:06:48.020 "tls_version": 0, 00:06:48.020 "enable_ktls": false 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "sock_impl_set_options", 00:06:48.020 "params": { 00:06:48.020 "impl_name": "posix", 00:06:48.020 "recv_buf_size": 2097152, 00:06:48.020 "send_buf_size": 2097152, 00:06:48.020 "enable_recv_pipe": true, 00:06:48.020 "enable_quickack": false, 00:06:48.020 "enable_placement_id": 0, 00:06:48.020 "enable_zerocopy_send_server": true, 00:06:48.020 "enable_zerocopy_send_client": false, 00:06:48.020 "zerocopy_threshold": 0, 00:06:48.020 "tls_version": 0, 00:06:48.020 "enable_ktls": false 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "sock_impl_set_options", 00:06:48.020 "params": { 00:06:48.020 "impl_name": "uring", 00:06:48.020 "recv_buf_size": 2097152, 00:06:48.020 "send_buf_size": 2097152, 00:06:48.020 "enable_recv_pipe": true, 00:06:48.020 "enable_quickack": false, 00:06:48.020 "enable_placement_id": 0, 00:06:48.020 "enable_zerocopy_send_server": false, 00:06:48.020 "enable_zerocopy_send_client": false, 00:06:48.020 "zerocopy_threshold": 0, 00:06:48.020 "tls_version": 0, 00:06:48.020 "enable_ktls": false 00:06:48.020 } 00:06:48.020 } 00:06:48.020 ] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "vmd", 00:06:48.020 "config": [] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "accel", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "accel_set_options", 00:06:48.020 "params": { 00:06:48.020 "small_cache_size": 128, 00:06:48.020 "large_cache_size": 16, 00:06:48.020 "task_count": 2048, 00:06:48.020 "sequence_count": 2048, 00:06:48.020 "buf_count": 2048 00:06:48.020 } 00:06:48.020 } 00:06:48.020 ] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "bdev", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "bdev_set_options", 00:06:48.020 "params": { 00:06:48.020 "bdev_io_pool_size": 65535, 00:06:48.020 "bdev_io_cache_size": 256, 00:06:48.020 "bdev_auto_examine": true, 00:06:48.020 "iobuf_small_cache_size": 128, 00:06:48.020 "iobuf_large_cache_size": 16 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "bdev_raid_set_options", 00:06:48.020 "params": { 00:06:48.020 "process_window_size_kb": 1024, 00:06:48.020 "process_max_bandwidth_mb_sec": 0 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "bdev_iscsi_set_options", 00:06:48.020 "params": { 00:06:48.020 "timeout_sec": 30 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "bdev_nvme_set_options", 00:06:48.020 "params": { 00:06:48.020 "action_on_timeout": "none", 00:06:48.020 "timeout_us": 0, 00:06:48.020 "timeout_admin_us": 0, 00:06:48.020 "keep_alive_timeout_ms": 10000, 00:06:48.020 "arbitration_burst": 0, 
00:06:48.020 "low_priority_weight": 0, 00:06:48.020 "medium_priority_weight": 0, 00:06:48.020 "high_priority_weight": 0, 00:06:48.020 "nvme_adminq_poll_period_us": 10000, 00:06:48.020 "nvme_ioq_poll_period_us": 0, 00:06:48.020 "io_queue_requests": 0, 00:06:48.020 "delay_cmd_submit": true, 00:06:48.020 "transport_retry_count": 4, 00:06:48.020 "bdev_retry_count": 3, 00:06:48.020 "transport_ack_timeout": 0, 00:06:48.020 "ctrlr_loss_timeout_sec": 0, 00:06:48.020 "reconnect_delay_sec": 0, 00:06:48.020 "fast_io_fail_timeout_sec": 0, 00:06:48.020 "disable_auto_failback": false, 00:06:48.020 "generate_uuids": false, 00:06:48.020 "transport_tos": 0, 00:06:48.020 "nvme_error_stat": false, 00:06:48.020 "rdma_srq_size": 0, 00:06:48.020 "io_path_stat": false, 00:06:48.020 "allow_accel_sequence": false, 00:06:48.020 "rdma_max_cq_size": 0, 00:06:48.020 "rdma_cm_event_timeout_ms": 0, 00:06:48.020 "dhchap_digests": [ 00:06:48.020 "sha256", 00:06:48.020 "sha384", 00:06:48.020 "sha512" 00:06:48.020 ], 00:06:48.020 "dhchap_dhgroups": [ 00:06:48.020 "null", 00:06:48.020 "ffdhe2048", 00:06:48.020 "ffdhe3072", 00:06:48.020 "ffdhe4096", 00:06:48.020 "ffdhe6144", 00:06:48.020 "ffdhe8192" 00:06:48.020 ] 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "bdev_nvme_set_hotplug", 00:06:48.020 "params": { 00:06:48.020 "period_us": 100000, 00:06:48.020 "enable": false 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "bdev_wait_for_examine" 00:06:48.020 } 00:06:48.020 ] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "scsi", 00:06:48.020 "config": null 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "scheduler", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "framework_set_scheduler", 00:06:48.020 "params": { 00:06:48.020 "name": "static" 00:06:48.020 } 00:06:48.020 } 00:06:48.020 ] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "vhost_scsi", 00:06:48.020 "config": [] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "vhost_blk", 00:06:48.020 "config": [] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "ublk", 00:06:48.020 "config": [] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "nbd", 00:06:48.020 "config": [] 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "subsystem": "nvmf", 00:06:48.020 "config": [ 00:06:48.020 { 00:06:48.020 "method": "nvmf_set_config", 00:06:48.020 "params": { 00:06:48.020 "discovery_filter": "match_any", 00:06:48.020 "admin_cmd_passthru": { 00:06:48.020 "identify_ctrlr": false 00:06:48.020 }, 00:06:48.020 "dhchap_digests": [ 00:06:48.020 "sha256", 00:06:48.020 "sha384", 00:06:48.020 "sha512" 00:06:48.020 ], 00:06:48.020 "dhchap_dhgroups": [ 00:06:48.020 "null", 00:06:48.020 "ffdhe2048", 00:06:48.020 "ffdhe3072", 00:06:48.020 "ffdhe4096", 00:06:48.020 "ffdhe6144", 00:06:48.020 "ffdhe8192" 00:06:48.020 ] 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "nvmf_set_max_subsystems", 00:06:48.020 "params": { 00:06:48.020 "max_subsystems": 1024 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "nvmf_set_crdt", 00:06:48.020 "params": { 00:06:48.020 "crdt1": 0, 00:06:48.020 "crdt2": 0, 00:06:48.020 "crdt3": 0 00:06:48.020 } 00:06:48.020 }, 00:06:48.020 { 00:06:48.020 "method": "nvmf_create_transport", 00:06:48.021 "params": { 00:06:48.021 "trtype": "TCP", 00:06:48.021 "max_queue_depth": 128, 00:06:48.021 "max_io_qpairs_per_ctrlr": 127, 00:06:48.021 "in_capsule_data_size": 4096, 00:06:48.021 "max_io_size": 131072, 00:06:48.021 "io_unit_size": 131072, 00:06:48.021 
"max_aq_depth": 128, 00:06:48.021 "num_shared_buffers": 511, 00:06:48.021 "buf_cache_size": 4294967295, 00:06:48.021 "dif_insert_or_strip": false, 00:06:48.021 "zcopy": false, 00:06:48.021 "c2h_success": true, 00:06:48.021 "sock_priority": 0, 00:06:48.021 "abort_timeout_sec": 1, 00:06:48.021 "ack_timeout": 0, 00:06:48.021 "data_wr_pool_size": 0 00:06:48.021 } 00:06:48.021 } 00:06:48.021 ] 00:06:48.021 }, 00:06:48.021 { 00:06:48.021 "subsystem": "iscsi", 00:06:48.021 "config": [ 00:06:48.021 { 00:06:48.021 "method": "iscsi_set_options", 00:06:48.021 "params": { 00:06:48.021 "node_base": "iqn.2016-06.io.spdk", 00:06:48.021 "max_sessions": 128, 00:06:48.021 "max_connections_per_session": 2, 00:06:48.021 "max_queue_depth": 64, 00:06:48.021 "default_time2wait": 2, 00:06:48.021 "default_time2retain": 20, 00:06:48.021 "first_burst_length": 8192, 00:06:48.021 "immediate_data": true, 00:06:48.021 "allow_duplicated_isid": false, 00:06:48.021 "error_recovery_level": 0, 00:06:48.021 "nop_timeout": 60, 00:06:48.021 "nop_in_interval": 30, 00:06:48.021 "disable_chap": false, 00:06:48.021 "require_chap": false, 00:06:48.021 "mutual_chap": false, 00:06:48.021 "chap_group": 0, 00:06:48.021 "max_large_datain_per_connection": 64, 00:06:48.021 "max_r2t_per_connection": 4, 00:06:48.021 "pdu_pool_size": 36864, 00:06:48.021 "immediate_data_pool_size": 16384, 00:06:48.021 "data_out_pool_size": 2048 00:06:48.021 } 00:06:48.021 } 00:06:48.021 ] 00:06:48.021 } 00:06:48.021 ] 00:06:48.021 } 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70158 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70158 ']' 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70158 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70158 00:06:48.021 killing process with pid 70158 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70158' 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70158 00:06:48.021 10:20:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70158 00:06:48.280 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70173 00:06:48.280 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.280 10:20:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70173 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70173 ']' 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70173 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # uname 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70173 00:06:53.553 killing process with pid 70173 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70173' 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70173 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70173 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:53.553 00:06:53.553 real 0m6.182s 00:06:53.553 user 0m5.937s 00:06:53.553 sys 0m0.439s 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.553 ************************************ 00:06:53.553 END TEST skip_rpc_with_json 00:06:53.553 ************************************ 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.553 10:20:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:53.553 10:20:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.553 10:20:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.553 10:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.553 ************************************ 00:06:53.553 START TEST skip_rpc_with_delay 00:06:53.553 ************************************ 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:53.553 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.812 [2024-12-10 10:20:28.782079] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:53.812 [2024-12-10 10:20:28.782203] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:53.812 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:53.812 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.812 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.812 ************************************ 00:06:53.812 END TEST skip_rpc_with_delay 00:06:53.812 ************************************ 00:06:53.812 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.812 00:06:53.812 real 0m0.091s 00:06:53.812 user 0m0.058s 00:06:53.812 sys 0m0.033s 00:06:53.812 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.812 10:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:53.812 10:20:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:53.812 10:20:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:53.812 10:20:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:53.812 10:20:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.812 10:20:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.812 10:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.812 ************************************ 00:06:53.812 START TEST exit_on_failed_rpc_init 00:06:53.812 ************************************ 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70282 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70282 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70282 ']' 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
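The app.c error recorded a few entries above comes from combining mutually exclusive flags: --no-rpc-server disables the very RPC server that --wait-for-rpc waits on. A one-line reproduction, expected to fail exactly as in the trace:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # => ERROR: Cannot use '--wait-for-rpc' if no RPC server is going to be started.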
00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.812 10:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.812 [2024-12-10 10:20:28.934051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.812 [2024-12-10 10:20:28.934337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70282 ] 00:06:54.071 [2024-12-10 10:20:29.072515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.071 [2024-12-10 10:20:29.106799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.071 [2024-12-10 10:20:29.143165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:54.071 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:54.330 [2024-12-10 10:20:29.334294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
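exit_on_failed_rpc_init provokes its failure deliberately: a second target is pointed at the same default RPC socket as the first. A hedged sketch of the conflict, plus the usual way a second instance avoids it (its own -r socket path, as the json_config test further down does):

    build/bin/spdk_tgt -m 0x1 &    # first instance owns /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2      # expected to fail: RPC socket already in use
    # A coexisting second instance would instead pick a distinct socket, e.g.
    # build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock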
00:06:54.330 [2024-12-10 10:20:29.334388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70293 ] 00:06:54.330 [2024-12-10 10:20:29.475818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.330 [2024-12-10 10:20:29.519265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.330 [2024-12-10 10:20:29.519378] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:54.330 [2024-12-10 10:20:29.519427] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:54.330 [2024-12-10 10:20:29.519447] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70282 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70282 ']' 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70282 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70282 00:06:54.589 killing process with pid 70282 00:06:54.589 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.590 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.590 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70282' 00:06:54.590 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70282 00:06:54.590 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70282 00:06:54.848 ************************************ 00:06:54.848 END TEST exit_on_failed_rpc_init 00:06:54.849 ************************************ 00:06:54.849 00:06:54.849 real 0m1.016s 00:06:54.849 user 0m1.178s 00:06:54.849 sys 0m0.283s 00:06:54.849 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.849 10:20:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:54.849 10:20:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:54.849 00:06:54.849 real 0m13.011s 00:06:54.849 user 0m12.388s 
00:06:54.849 sys 0m1.156s 00:06:54.849 10:20:29 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.849 ************************************ 00:06:54.849 END TEST skip_rpc 00:06:54.849 ************************************ 00:06:54.849 10:20:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.849 10:20:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:54.849 10:20:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.849 10:20:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.849 10:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:54.849 ************************************ 00:06:54.849 START TEST rpc_client 00:06:54.849 ************************************ 00:06:54.849 10:20:29 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:54.849 * Looking for test storage... 00:06:54.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:54.849 10:20:30 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.849 10:20:30 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.849 10:20:30 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.108 10:20:30 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.108 --rc genhtml_branch_coverage=1 00:06:55.108 --rc genhtml_function_coverage=1 00:06:55.108 --rc genhtml_legend=1 00:06:55.108 --rc geninfo_all_blocks=1 00:06:55.108 --rc geninfo_unexecuted_blocks=1 00:06:55.108 00:06:55.108 ' 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.108 --rc genhtml_branch_coverage=1 00:06:55.108 --rc genhtml_function_coverage=1 00:06:55.108 --rc genhtml_legend=1 00:06:55.108 --rc geninfo_all_blocks=1 00:06:55.108 --rc geninfo_unexecuted_blocks=1 00:06:55.108 00:06:55.108 ' 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.108 --rc genhtml_branch_coverage=1 00:06:55.108 --rc genhtml_function_coverage=1 00:06:55.108 --rc genhtml_legend=1 00:06:55.108 --rc geninfo_all_blocks=1 00:06:55.108 --rc geninfo_unexecuted_blocks=1 00:06:55.108 00:06:55.108 ' 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.108 --rc genhtml_branch_coverage=1 00:06:55.108 --rc genhtml_function_coverage=1 00:06:55.108 --rc genhtml_legend=1 00:06:55.108 --rc geninfo_all_blocks=1 00:06:55.108 --rc geninfo_unexecuted_blocks=1 00:06:55.108 00:06:55.108 ' 00:06:55.108 10:20:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:55.108 OK 00:06:55.108 10:20:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:55.108 00:06:55.108 real 0m0.204s 00:06:55.108 user 0m0.142s 00:06:55.108 sys 0m0.076s 00:06:55.108 ************************************ 00:06:55.108 END TEST rpc_client 00:06:55.108 ************************************ 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.108 10:20:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:55.108 10:20:30 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:55.108 10:20:30 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.108 10:20:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.108 10:20:30 -- common/autotest_common.sh@10 -- # set +x 00:06:55.108 ************************************ 00:06:55.108 START TEST json_config 00:06:55.108 ************************************ 00:06:55.108 10:20:30 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:55.108 10:20:30 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.108 10:20:30 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.108 10:20:30 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.368 10:20:30 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.368 10:20:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.368 10:20:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.368 10:20:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.368 10:20:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.368 10:20:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.368 10:20:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:55.368 10:20:30 json_config -- scripts/common.sh@345 -- # : 1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.368 10:20:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.368 10:20:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@353 -- # local d=1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.368 10:20:30 json_config -- scripts/common.sh@355 -- # echo 1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.368 10:20:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@353 -- # local d=2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.368 10:20:30 json_config -- scripts/common.sh@355 -- # echo 2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.368 10:20:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.368 10:20:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.368 10:20:30 json_config -- scripts/common.sh@368 -- # return 0 00:06:55.368 10:20:30 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.368 10:20:30 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.368 --rc genhtml_branch_coverage=1 00:06:55.368 --rc genhtml_function_coverage=1 00:06:55.368 --rc genhtml_legend=1 00:06:55.368 --rc geninfo_all_blocks=1 00:06:55.368 --rc geninfo_unexecuted_blocks=1 00:06:55.368 00:06:55.368 ' 00:06:55.368 10:20:30 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.368 --rc genhtml_branch_coverage=1 00:06:55.368 --rc genhtml_function_coverage=1 00:06:55.369 --rc genhtml_legend=1 00:06:55.369 --rc geninfo_all_blocks=1 00:06:55.369 --rc geninfo_unexecuted_blocks=1 00:06:55.369 00:06:55.369 ' 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.369 --rc genhtml_branch_coverage=1 00:06:55.369 --rc genhtml_function_coverage=1 00:06:55.369 --rc genhtml_legend=1 00:06:55.369 --rc geninfo_all_blocks=1 00:06:55.369 --rc geninfo_unexecuted_blocks=1 00:06:55.369 00:06:55.369 ' 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.369 --rc genhtml_branch_coverage=1 00:06:55.369 --rc genhtml_function_coverage=1 00:06:55.369 --rc genhtml_legend=1 00:06:55.369 --rc geninfo_all_blocks=1 00:06:55.369 --rc geninfo_unexecuted_blocks=1 00:06:55.369 00:06:55.369 ' 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.369 10:20:30 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.369 10:20:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.369 10:20:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.369 10:20:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.369 10:20:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.369 10:20:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.369 10:20:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.369 10:20:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.369 10:20:30 json_config -- paths/export.sh@5 -- # export PATH 00:06:55.369 10:20:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@51 -- # : 0 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.369 10:20:30 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.369 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.369 10:20:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:55.369 INFO: JSON configuration test init 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.369 10:20:30 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:55.369 10:20:30 json_config -- json_config/common.sh@9 -- # local app=target 00:06:55.369 10:20:30 json_config -- json_config/common.sh@10 -- # shift 
00:06:55.369 10:20:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:55.369 10:20:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:55.369 10:20:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:55.369 10:20:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:55.369 10:20:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:55.369 10:20:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70427 00:06:55.369 10:20:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:55.369 Waiting for target to run... 00:06:55.369 10:20:30 json_config -- json_config/common.sh@25 -- # waitforlisten 70427 /var/tmp/spdk_tgt.sock 00:06:55.369 10:20:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@831 -- # '[' -z 70427 ']' 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:55.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.369 10:20:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.369 [2024-12-10 10:20:30.500724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
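The json_config target is started with --wait-for-rpc, so subsystem initialization pauses until configuration arrives over /var/tmp/spdk_tgt.sock. A hedged outline of that flow; the final framework_start_init call is not visible in this excerpt and is listed only as an assumption about what the script does next:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/gen_nvme.sh --json-with-subsystems |
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # assumption, not shown in this log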
00:06:55.369 [2024-12-10 10:20:30.501491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70427 ] 00:06:55.628 [2024-12-10 10:20:30.815083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.628 [2024-12-10 10:20:30.843280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.566 00:06:56.566 10:20:31 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.566 10:20:31 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:56.566 10:20:31 json_config -- json_config/common.sh@26 -- # echo '' 00:06:56.566 10:20:31 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:56.566 10:20:31 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:56.566 10:20:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.566 10:20:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 10:20:31 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:56.566 10:20:31 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:56.566 10:20:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:56.566 10:20:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 10:20:31 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:56.566 10:20:31 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:56.566 10:20:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:56.825 [2024-12-10 10:20:31.845714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:56.825 10:20:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.825 10:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:56.825 10:20:32 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:56.825 10:20:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@54 -- # sort 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:57.393 10:20:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:57.393 10:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:57.393 10:20:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.393 10:20:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:57.393 10:20:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:57.393 10:20:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:57.652 MallocForNvmf0 00:06:57.652 10:20:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:57.652 10:20:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:57.910 MallocForNvmf1 00:06:57.910 10:20:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:57.910 10:20:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:58.169 [2024-12-10 10:20:33.227007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.169 10:20:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:58.169 10:20:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:58.428 10:20:33 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:58.428 10:20:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:58.750 10:20:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:58.750 10:20:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:58.750 10:20:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:58.750 10:20:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:59.009 [2024-12-10 10:20:34.227524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:59.267 10:20:34 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:59.267 10:20:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.267 10:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.267 10:20:34 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:59.267 10:20:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.267 10:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.267 10:20:34 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:59.267 10:20:34 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:59.267 10:20:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:59.526 MallocBdevForConfigChangeCheck 00:06:59.526 10:20:34 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:59.526 10:20:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.526 10:20:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.526 10:20:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:59.526 10:20:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:00.105 INFO: shutting down applications... 00:07:00.105 10:20:35 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
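For reference, the NVMe/TCP target configuration the trace above builds can be replayed by hand with the same RPCs; a minimal sketch (paths match the trace, while redirecting save_config into spdk_tgt_config.json is an assumption based on the relaunch step that follows):

#!/usr/bin/env bash
# assumed repo layout and RPC socket, copied from the trace above
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MiB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MiB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport (-u io unit size, -c in-capsule data size)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > spdk_tgt_config.json                  # snapshot consumed later by the relaunch (assumed redirect)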
00:07:00.105 10:20:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:00.105 10:20:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:00.105 10:20:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:00.105 10:20:35 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:00.362 Calling clear_iscsi_subsystem 00:07:00.362 Calling clear_nvmf_subsystem 00:07:00.362 Calling clear_nbd_subsystem 00:07:00.362 Calling clear_ublk_subsystem 00:07:00.362 Calling clear_vhost_blk_subsystem 00:07:00.362 Calling clear_vhost_scsi_subsystem 00:07:00.362 Calling clear_bdev_subsystem 00:07:00.362 10:20:35 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:00.362 10:20:35 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:00.362 10:20:35 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:00.362 10:20:35 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:00.362 10:20:35 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:00.362 10:20:35 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:00.621 10:20:35 json_config -- json_config/json_config.sh@352 -- # break 00:07:00.621 10:20:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:00.621 10:20:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:00.621 10:20:35 json_config -- json_config/common.sh@31 -- # local app=target 00:07:00.621 10:20:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:00.621 10:20:35 json_config -- json_config/common.sh@35 -- # [[ -n 70427 ]] 00:07:00.621 10:20:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 70427 00:07:00.621 10:20:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:00.621 10:20:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:00.621 10:20:35 json_config -- json_config/common.sh@41 -- # kill -0 70427 00:07:00.621 10:20:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:01.189 10:20:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:01.189 10:20:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:01.189 10:20:36 json_config -- json_config/common.sh@41 -- # kill -0 70427 00:07:01.189 SPDK target shutdown done 00:07:01.189 INFO: relaunching applications... 00:07:01.189 10:20:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:01.189 10:20:36 json_config -- json_config/common.sh@43 -- # break 00:07:01.189 10:20:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:01.189 10:20:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:01.189 10:20:36 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
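The teardown traced above follows a simple pattern: wipe the live configuration, verify nothing is left, then SIGINT the target and poll until the PID disappears. A condensed sketch; the exact pipeline order is inferred from the traced commands, and the 30 x 0.5 s budget mirrors the loop in json_config/common.sh:

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock
pid=70427   # spdk_tgt PID from this run

# drop every subsystem's runtime configuration
"$SPDK/test/json_config/clear_config.py" -s "$SOCK" clear_config

# confirm the remaining config is empty (check_empty exits non-zero otherwise)
"$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
  | "$SPDK/test/json_config/config_filter.py" -method delete_global_parameters \
  | "$SPDK/test/json_config/config_filter.py" -method check_empty

# graceful shutdown, then wait up to 15 s for the process to exit
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
  kill -0 "$pid" 2>/dev/null || break
  sleep 0.5
done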
00:07:01.189 10:20:36 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.189 10:20:36 json_config -- json_config/common.sh@9 -- # local app=target 00:07:01.189 10:20:36 json_config -- json_config/common.sh@10 -- # shift 00:07:01.189 10:20:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:01.189 10:20:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:01.189 10:20:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:01.189 10:20:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.189 10:20:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.189 10:20:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70622 00:07:01.189 10:20:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.189 Waiting for target to run... 00:07:01.189 10:20:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:01.189 10:20:36 json_config -- json_config/common.sh@25 -- # waitforlisten 70622 /var/tmp/spdk_tgt.sock 00:07:01.189 10:20:36 json_config -- common/autotest_common.sh@831 -- # '[' -z 70622 ']' 00:07:01.189 10:20:36 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:01.189 10:20:36 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.189 10:20:36 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:01.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:01.189 10:20:36 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.189 10:20:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.189 [2024-12-10 10:20:36.360607] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.189 [2024-12-10 10:20:36.360708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70622 ] 00:07:01.449 [2024-12-10 10:20:36.664884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.708 [2024-12-10 10:20:36.686456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.708 [2024-12-10 10:20:36.814324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.967 [2024-12-10 10:20:37.009264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.967 [2024-12-10 10:20:37.041346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:02.226 00:07:02.226 INFO: Checking if target configuration is the same... 
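The "is the configuration the same" check that runs next boils down to normalizing both JSON documents and diffing them, which is what json_diff.sh does via config_filter.py. A sketch under those assumptions (the /tmp file names here are placeholders; the real script uses mktemp, as the trace shows):

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
SORT="$SPDK/test/json_config/config_filter.py -method sort"

# dump the live configuration and normalize both sides
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | $SORT > /tmp/live.json
$SORT < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json

# identical output means the relaunched target restored the same config
diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'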
00:07:02.226 10:20:37 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.226 10:20:37 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:02.226 10:20:37 json_config -- json_config/common.sh@26 -- # echo '' 00:07:02.226 10:20:37 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:02.226 10:20:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:02.226 10:20:37 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.226 10:20:37 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:02.226 10:20:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:02.226 + '[' 2 -ne 2 ']' 00:07:02.226 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:02.226 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:02.226 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:02.226 +++ basename /dev/fd/62 00:07:02.226 ++ mktemp /tmp/62.XXX 00:07:02.226 + tmp_file_1=/tmp/62.1cZ 00:07:02.226 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.226 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:02.226 + tmp_file_2=/tmp/spdk_tgt_config.json.UC1 00:07:02.226 + ret=0 00:07:02.226 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.793 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.793 + diff -u /tmp/62.1cZ /tmp/spdk_tgt_config.json.UC1 00:07:02.793 INFO: JSON config files are the same 00:07:02.793 + echo 'INFO: JSON config files are the same' 00:07:02.793 + rm /tmp/62.1cZ /tmp/spdk_tgt_config.json.UC1 00:07:02.793 + exit 0 00:07:02.793 INFO: changing configuration and checking if this can be detected... 00:07:02.793 10:20:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:02.793 10:20:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:02.793 10:20:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:02.793 10:20:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:03.052 10:20:38 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:03.052 10:20:38 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:03.052 10:20:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:03.052 + '[' 2 -ne 2 ']' 00:07:03.052 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:03.052 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:03.052 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:03.052 +++ basename /dev/fd/62 00:07:03.052 ++ mktemp /tmp/62.XXX 00:07:03.052 + tmp_file_1=/tmp/62.YQ4 00:07:03.052 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:03.052 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:03.052 + tmp_file_2=/tmp/spdk_tgt_config.json.G6z 00:07:03.052 + ret=0 00:07:03.052 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:03.619 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:03.619 + diff -u /tmp/62.YQ4 /tmp/spdk_tgt_config.json.G6z 00:07:03.619 + ret=1 00:07:03.619 + echo '=== Start of file: /tmp/62.YQ4 ===' 00:07:03.619 + cat /tmp/62.YQ4 00:07:03.619 + echo '=== End of file: /tmp/62.YQ4 ===' 00:07:03.619 + echo '' 00:07:03.619 + echo '=== Start of file: /tmp/spdk_tgt_config.json.G6z ===' 00:07:03.619 + cat /tmp/spdk_tgt_config.json.G6z 00:07:03.619 + echo '=== End of file: /tmp/spdk_tgt_config.json.G6z ===' 00:07:03.619 + echo '' 00:07:03.619 + rm /tmp/62.YQ4 /tmp/spdk_tgt_config.json.G6z 00:07:03.619 + exit 1 00:07:03.619 INFO: configuration change detected. 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 70622 ]] 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.620 10:20:38 json_config -- json_config/json_config.sh@330 -- # killprocess 70622 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@950 -- # '[' -z 70622 ']' 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@954 -- # kill -0 70622 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@955 -- # uname 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70622 00:07:03.620 
killing process with pid 70622 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70622' 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@969 -- # kill 70622 00:07:03.620 10:20:38 json_config -- common/autotest_common.sh@974 -- # wait 70622 00:07:03.879 10:20:38 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:03.879 10:20:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:03.879 10:20:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.879 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.879 10:20:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:03.879 INFO: Success 00:07:03.879 10:20:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:03.879 ************************************ 00:07:03.879 END TEST json_config 00:07:03.879 ************************************ 00:07:03.879 00:07:03.879 real 0m8.685s 00:07:03.879 user 0m12.724s 00:07:03.879 sys 0m1.455s 00:07:03.879 10:20:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.879 10:20:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.879 10:20:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:03.879 10:20:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.879 10:20:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.879 10:20:38 -- common/autotest_common.sh@10 -- # set +x 00:07:03.879 ************************************ 00:07:03.879 START TEST json_config_extra_key 00:07:03.879 ************************************ 00:07:03.879 10:20:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:03.879 10:20:39 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.879 10:20:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.879 10:20:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.879 10:20:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.879 10:20:39 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.879 10:20:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.880 10:20:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:04.139 10:20:39 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.139 10:20:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.139 --rc genhtml_branch_coverage=1 00:07:04.139 --rc genhtml_function_coverage=1 00:07:04.139 --rc genhtml_legend=1 00:07:04.139 --rc geninfo_all_blocks=1 00:07:04.139 --rc geninfo_unexecuted_blocks=1 00:07:04.139 00:07:04.139 ' 00:07:04.139 10:20:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.139 --rc genhtml_branch_coverage=1 00:07:04.139 --rc genhtml_function_coverage=1 00:07:04.139 --rc genhtml_legend=1 00:07:04.139 --rc geninfo_all_blocks=1 00:07:04.139 --rc geninfo_unexecuted_blocks=1 00:07:04.139 00:07:04.139 ' 00:07:04.139 10:20:39 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.139 --rc genhtml_branch_coverage=1 00:07:04.139 --rc genhtml_function_coverage=1 00:07:04.139 --rc genhtml_legend=1 00:07:04.139 --rc geninfo_all_blocks=1 00:07:04.140 --rc geninfo_unexecuted_blocks=1 00:07:04.140 00:07:04.140 ' 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.140 --rc genhtml_branch_coverage=1 00:07:04.140 --rc genhtml_function_coverage=1 00:07:04.140 --rc genhtml_legend=1 00:07:04.140 --rc geninfo_all_blocks=1 00:07:04.140 --rc geninfo_unexecuted_blocks=1 00:07:04.140 00:07:04.140 ' 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.140 10:20:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.140 10:20:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.140 10:20:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.140 10:20:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.140 10:20:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.140 10:20:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.140 10:20:39 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.140 10:20:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:04.140 10:20:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.140 10:20:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.140 INFO: launching applications... 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
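Launching the target against an arbitrary JSON config, as json_config_test_start_app does next, is a one-liner plus a wait for the RPC socket. A minimal sketch; the readiness probe via rpc_get_methods is an assumption (the harness uses its own waitforlisten helper):

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
  --json "$SPDK/test/json_config/extra_key.json" &
pid=$!

# crude readiness probe: retry an RPC until the socket answers
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done
echo "target $pid is up"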
00:07:04.140 10:20:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70776 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:04.140 Waiting for target to run... 00:07:04.140 10:20:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70776 /var/tmp/spdk_tgt.sock 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70776 ']' 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:04.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.140 10:20:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:04.140 [2024-12-10 10:20:39.204694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.140 [2024-12-10 10:20:39.204789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70776 ] 00:07:04.400 [2024-12-10 10:20:39.505986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.400 [2024-12-10 10:20:39.529065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.400 [2024-12-10 10:20:39.551183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.335 00:07:05.335 INFO: shutting down applications... 00:07:05.335 10:20:40 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.335 10:20:40 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:05.335 10:20:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:05.335 10:20:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:05.336 10:20:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70776 ]] 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70776 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70776 00:07:05.336 10:20:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70776 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:05.594 10:20:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:05.594 SPDK target shutdown done 00:07:05.594 10:20:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:05.594 Success 00:07:05.594 00:07:05.594 real 0m1.782s 00:07:05.594 user 0m1.666s 00:07:05.594 sys 0m0.307s 00:07:05.594 ************************************ 00:07:05.594 END TEST json_config_extra_key 00:07:05.594 ************************************ 00:07:05.594 10:20:40 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.594 10:20:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:05.594 10:20:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.594 10:20:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.594 10:20:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.594 10:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:05.594 ************************************ 00:07:05.594 START TEST alias_rpc 00:07:05.594 ************************************ 00:07:05.594 10:20:40 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.853 * Looking for test storage... 
00:07:05.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
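The alias_rpc test that starts here is small: bring up spdk_tgt on its default RPC socket, replay a configuration through rpc.py load_config -i (the flag is copied verbatim from the trace; per the test's name it exercises deprecated RPC alias names), then stop the target. A sketch with a hypothetical input file:

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_tgt" &          # listens on /var/tmp/spdk.sock by default
pid=$!
sleep 1                               # stand-in for the harness's waitforlisten

# replay a config that uses alias RPC names; config_with_aliases.json is hypothetical
"$SPDK/scripts/rpc.py" load_config -i < config_with_aliases.json

kill -SIGINT "$pid" && wait "$pid"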
00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.853 10:20:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.853 --rc genhtml_branch_coverage=1 00:07:05.853 --rc genhtml_function_coverage=1 00:07:05.853 --rc genhtml_legend=1 00:07:05.853 --rc geninfo_all_blocks=1 00:07:05.853 --rc geninfo_unexecuted_blocks=1 00:07:05.853 00:07:05.853 ' 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.853 --rc genhtml_branch_coverage=1 00:07:05.853 --rc genhtml_function_coverage=1 00:07:05.853 --rc genhtml_legend=1 00:07:05.853 --rc geninfo_all_blocks=1 00:07:05.853 --rc geninfo_unexecuted_blocks=1 00:07:05.853 00:07:05.853 ' 00:07:05.853 10:20:40 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.853 --rc genhtml_branch_coverage=1 00:07:05.853 --rc genhtml_function_coverage=1 00:07:05.854 --rc genhtml_legend=1 00:07:05.854 --rc geninfo_all_blocks=1 00:07:05.854 --rc geninfo_unexecuted_blocks=1 00:07:05.854 00:07:05.854 ' 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.854 --rc genhtml_branch_coverage=1 00:07:05.854 --rc genhtml_function_coverage=1 00:07:05.854 --rc genhtml_legend=1 00:07:05.854 --rc geninfo_all_blocks=1 00:07:05.854 --rc geninfo_unexecuted_blocks=1 00:07:05.854 00:07:05.854 ' 00:07:05.854 10:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.854 10:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70849 00:07:05.854 10:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70849 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70849 ']' 00:07:05.854 10:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.854 10:20:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.854 [2024-12-10 10:20:41.056760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:05.854 [2024-12-10 10:20:41.057082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:07:06.113 [2024-12-10 10:20:41.196175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.113 [2024-12-10 10:20:41.230577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.113 [2024-12-10 10:20:41.266371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.372 10:20:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.372 10:20:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:06.372 10:20:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:06.630 10:20:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70849 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70849 ']' 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70849 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70849 00:07:06.630 killing process with pid 70849 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70849' 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@969 -- # kill 70849 00:07:06.630 10:20:41 alias_rpc -- common/autotest_common.sh@974 -- # wait 70849 00:07:06.890 ************************************ 00:07:06.890 END TEST alias_rpc 00:07:06.890 ************************************ 00:07:06.890 00:07:06.890 real 0m1.162s 00:07:06.890 user 0m1.346s 00:07:06.890 sys 0m0.318s 00:07:06.890 10:20:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.890 10:20:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.890 10:20:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:06.890 10:20:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:06.890 10:20:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.890 10:20:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.890 10:20:41 -- common/autotest_common.sh@10 -- # set +x 00:07:06.890 ************************************ 00:07:06.890 START TEST spdkcli_tcp 00:07:06.890 ************************************ 00:07:06.890 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:06.890 * Looking for test storage... 
00:07:06.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:06.890 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.890 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.890 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.149 10:20:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.149 --rc genhtml_branch_coverage=1 00:07:07.149 --rc genhtml_function_coverage=1 00:07:07.149 --rc genhtml_legend=1 00:07:07.149 --rc geninfo_all_blocks=1 00:07:07.149 --rc geninfo_unexecuted_blocks=1 00:07:07.149 00:07:07.149 ' 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.149 --rc genhtml_branch_coverage=1 00:07:07.149 --rc genhtml_function_coverage=1 00:07:07.149 --rc genhtml_legend=1 00:07:07.149 --rc geninfo_all_blocks=1 00:07:07.149 --rc geninfo_unexecuted_blocks=1 00:07:07.149 
00:07:07.149 ' 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.149 --rc genhtml_branch_coverage=1 00:07:07.149 --rc genhtml_function_coverage=1 00:07:07.149 --rc genhtml_legend=1 00:07:07.149 --rc geninfo_all_blocks=1 00:07:07.149 --rc geninfo_unexecuted_blocks=1 00:07:07.149 00:07:07.149 ' 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.149 --rc genhtml_branch_coverage=1 00:07:07.149 --rc genhtml_function_coverage=1 00:07:07.149 --rc genhtml_legend=1 00:07:07.149 --rc geninfo_all_blocks=1 00:07:07.149 --rc geninfo_unexecuted_blocks=1 00:07:07.149 00:07:07.149 ' 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70925 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:07.149 10:20:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70925 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70925 ']' 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.149 10:20:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.149 [2024-12-10 10:20:42.256340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:07.149 [2024-12-10 10:20:42.256687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70925 ] 00:07:07.408 [2024-12-10 10:20:42.391992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.408 [2024-12-10 10:20:42.426026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.408 [2024-12-10 10:20:42.426035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.409 [2024-12-10 10:20:42.462472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.346 10:20:43 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.346 10:20:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:08.346 10:20:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:08.346 10:20:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70942 00:07:08.346 10:20:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:08.346 [ 00:07:08.346 "bdev_malloc_delete", 00:07:08.346 "bdev_malloc_create", 00:07:08.346 "bdev_null_resize", 00:07:08.346 "bdev_null_delete", 00:07:08.346 "bdev_null_create", 00:07:08.346 "bdev_nvme_cuse_unregister", 00:07:08.346 "bdev_nvme_cuse_register", 00:07:08.346 "bdev_opal_new_user", 00:07:08.346 "bdev_opal_set_lock_state", 00:07:08.346 "bdev_opal_delete", 00:07:08.346 "bdev_opal_get_info", 00:07:08.346 "bdev_opal_create", 00:07:08.346 "bdev_nvme_opal_revert", 00:07:08.346 "bdev_nvme_opal_init", 00:07:08.346 "bdev_nvme_send_cmd", 00:07:08.346 "bdev_nvme_set_keys", 00:07:08.346 "bdev_nvme_get_path_iostat", 00:07:08.346 "bdev_nvme_get_mdns_discovery_info", 00:07:08.346 "bdev_nvme_stop_mdns_discovery", 00:07:08.346 "bdev_nvme_start_mdns_discovery", 00:07:08.346 "bdev_nvme_set_multipath_policy", 00:07:08.346 "bdev_nvme_set_preferred_path", 00:07:08.346 "bdev_nvme_get_io_paths", 00:07:08.346 "bdev_nvme_remove_error_injection", 00:07:08.346 "bdev_nvme_add_error_injection", 00:07:08.346 "bdev_nvme_get_discovery_info", 00:07:08.346 "bdev_nvme_stop_discovery", 00:07:08.346 "bdev_nvme_start_discovery", 00:07:08.346 "bdev_nvme_get_controller_health_info", 00:07:08.346 "bdev_nvme_disable_controller", 00:07:08.346 "bdev_nvme_enable_controller", 00:07:08.346 "bdev_nvme_reset_controller", 00:07:08.346 "bdev_nvme_get_transport_statistics", 00:07:08.346 "bdev_nvme_apply_firmware", 00:07:08.346 "bdev_nvme_detach_controller", 00:07:08.346 "bdev_nvme_get_controllers", 00:07:08.346 "bdev_nvme_attach_controller", 00:07:08.346 "bdev_nvme_set_hotplug", 00:07:08.346 "bdev_nvme_set_options", 00:07:08.346 "bdev_passthru_delete", 00:07:08.346 "bdev_passthru_create", 00:07:08.346 "bdev_lvol_set_parent_bdev", 00:07:08.346 "bdev_lvol_set_parent", 00:07:08.346 "bdev_lvol_check_shallow_copy", 00:07:08.346 "bdev_lvol_start_shallow_copy", 00:07:08.346 "bdev_lvol_grow_lvstore", 00:07:08.346 "bdev_lvol_get_lvols", 00:07:08.346 "bdev_lvol_get_lvstores", 00:07:08.346 "bdev_lvol_delete", 00:07:08.346 "bdev_lvol_set_read_only", 00:07:08.346 "bdev_lvol_resize", 00:07:08.346 "bdev_lvol_decouple_parent", 00:07:08.346 "bdev_lvol_inflate", 00:07:08.346 "bdev_lvol_rename", 00:07:08.346 "bdev_lvol_clone_bdev", 00:07:08.346 "bdev_lvol_clone", 00:07:08.346 "bdev_lvol_snapshot", 
00:07:08.346 "bdev_lvol_create", 00:07:08.346 "bdev_lvol_delete_lvstore", 00:07:08.346 "bdev_lvol_rename_lvstore", 00:07:08.346 "bdev_lvol_create_lvstore", 00:07:08.346 "bdev_raid_set_options", 00:07:08.346 "bdev_raid_remove_base_bdev", 00:07:08.346 "bdev_raid_add_base_bdev", 00:07:08.346 "bdev_raid_delete", 00:07:08.346 "bdev_raid_create", 00:07:08.346 "bdev_raid_get_bdevs", 00:07:08.346 "bdev_error_inject_error", 00:07:08.346 "bdev_error_delete", 00:07:08.346 "bdev_error_create", 00:07:08.346 "bdev_split_delete", 00:07:08.346 "bdev_split_create", 00:07:08.346 "bdev_delay_delete", 00:07:08.346 "bdev_delay_create", 00:07:08.346 "bdev_delay_update_latency", 00:07:08.346 "bdev_zone_block_delete", 00:07:08.346 "bdev_zone_block_create", 00:07:08.346 "blobfs_create", 00:07:08.346 "blobfs_detect", 00:07:08.346 "blobfs_set_cache_size", 00:07:08.346 "bdev_aio_delete", 00:07:08.346 "bdev_aio_rescan", 00:07:08.346 "bdev_aio_create", 00:07:08.346 "bdev_ftl_set_property", 00:07:08.346 "bdev_ftl_get_properties", 00:07:08.346 "bdev_ftl_get_stats", 00:07:08.346 "bdev_ftl_unmap", 00:07:08.346 "bdev_ftl_unload", 00:07:08.346 "bdev_ftl_delete", 00:07:08.346 "bdev_ftl_load", 00:07:08.346 "bdev_ftl_create", 00:07:08.346 "bdev_virtio_attach_controller", 00:07:08.346 "bdev_virtio_scsi_get_devices", 00:07:08.346 "bdev_virtio_detach_controller", 00:07:08.346 "bdev_virtio_blk_set_hotplug", 00:07:08.346 "bdev_iscsi_delete", 00:07:08.346 "bdev_iscsi_create", 00:07:08.346 "bdev_iscsi_set_options", 00:07:08.346 "bdev_uring_delete", 00:07:08.346 "bdev_uring_rescan", 00:07:08.346 "bdev_uring_create", 00:07:08.346 "accel_error_inject_error", 00:07:08.346 "ioat_scan_accel_module", 00:07:08.346 "dsa_scan_accel_module", 00:07:08.346 "iaa_scan_accel_module", 00:07:08.346 "vfu_virtio_create_fs_endpoint", 00:07:08.346 "vfu_virtio_create_scsi_endpoint", 00:07:08.346 "vfu_virtio_scsi_remove_target", 00:07:08.346 "vfu_virtio_scsi_add_target", 00:07:08.346 "vfu_virtio_create_blk_endpoint", 00:07:08.346 "vfu_virtio_delete_endpoint", 00:07:08.346 "keyring_file_remove_key", 00:07:08.346 "keyring_file_add_key", 00:07:08.346 "keyring_linux_set_options", 00:07:08.346 "fsdev_aio_delete", 00:07:08.346 "fsdev_aio_create", 00:07:08.346 "iscsi_get_histogram", 00:07:08.347 "iscsi_enable_histogram", 00:07:08.347 "iscsi_set_options", 00:07:08.347 "iscsi_get_auth_groups", 00:07:08.347 "iscsi_auth_group_remove_secret", 00:07:08.347 "iscsi_auth_group_add_secret", 00:07:08.347 "iscsi_delete_auth_group", 00:07:08.347 "iscsi_create_auth_group", 00:07:08.347 "iscsi_set_discovery_auth", 00:07:08.347 "iscsi_get_options", 00:07:08.347 "iscsi_target_node_request_logout", 00:07:08.347 "iscsi_target_node_set_redirect", 00:07:08.347 "iscsi_target_node_set_auth", 00:07:08.347 "iscsi_target_node_add_lun", 00:07:08.347 "iscsi_get_stats", 00:07:08.347 "iscsi_get_connections", 00:07:08.347 "iscsi_portal_group_set_auth", 00:07:08.347 "iscsi_start_portal_group", 00:07:08.347 "iscsi_delete_portal_group", 00:07:08.347 "iscsi_create_portal_group", 00:07:08.347 "iscsi_get_portal_groups", 00:07:08.347 "iscsi_delete_target_node", 00:07:08.347 "iscsi_target_node_remove_pg_ig_maps", 00:07:08.347 "iscsi_target_node_add_pg_ig_maps", 00:07:08.347 "iscsi_create_target_node", 00:07:08.347 "iscsi_get_target_nodes", 00:07:08.347 "iscsi_delete_initiator_group", 00:07:08.347 "iscsi_initiator_group_remove_initiators", 00:07:08.347 "iscsi_initiator_group_add_initiators", 00:07:08.347 "iscsi_create_initiator_group", 00:07:08.347 "iscsi_get_initiator_groups", 00:07:08.347 
"nvmf_set_crdt", 00:07:08.347 "nvmf_set_config", 00:07:08.347 "nvmf_set_max_subsystems", 00:07:08.347 "nvmf_stop_mdns_prr", 00:07:08.347 "nvmf_publish_mdns_prr", 00:07:08.347 "nvmf_subsystem_get_listeners", 00:07:08.347 "nvmf_subsystem_get_qpairs", 00:07:08.347 "nvmf_subsystem_get_controllers", 00:07:08.347 "nvmf_get_stats", 00:07:08.347 "nvmf_get_transports", 00:07:08.347 "nvmf_create_transport", 00:07:08.347 "nvmf_get_targets", 00:07:08.347 "nvmf_delete_target", 00:07:08.347 "nvmf_create_target", 00:07:08.347 "nvmf_subsystem_allow_any_host", 00:07:08.347 "nvmf_subsystem_set_keys", 00:07:08.347 "nvmf_subsystem_remove_host", 00:07:08.347 "nvmf_subsystem_add_host", 00:07:08.347 "nvmf_ns_remove_host", 00:07:08.347 "nvmf_ns_add_host", 00:07:08.347 "nvmf_subsystem_remove_ns", 00:07:08.347 "nvmf_subsystem_set_ns_ana_group", 00:07:08.347 "nvmf_subsystem_add_ns", 00:07:08.347 "nvmf_subsystem_listener_set_ana_state", 00:07:08.347 "nvmf_discovery_get_referrals", 00:07:08.347 "nvmf_discovery_remove_referral", 00:07:08.347 "nvmf_discovery_add_referral", 00:07:08.347 "nvmf_subsystem_remove_listener", 00:07:08.347 "nvmf_subsystem_add_listener", 00:07:08.347 "nvmf_delete_subsystem", 00:07:08.347 "nvmf_create_subsystem", 00:07:08.347 "nvmf_get_subsystems", 00:07:08.347 "env_dpdk_get_mem_stats", 00:07:08.347 "nbd_get_disks", 00:07:08.347 "nbd_stop_disk", 00:07:08.347 "nbd_start_disk", 00:07:08.347 "ublk_recover_disk", 00:07:08.347 "ublk_get_disks", 00:07:08.347 "ublk_stop_disk", 00:07:08.347 "ublk_start_disk", 00:07:08.347 "ublk_destroy_target", 00:07:08.347 "ublk_create_target", 00:07:08.347 "virtio_blk_create_transport", 00:07:08.347 "virtio_blk_get_transports", 00:07:08.347 "vhost_controller_set_coalescing", 00:07:08.347 "vhost_get_controllers", 00:07:08.347 "vhost_delete_controller", 00:07:08.347 "vhost_create_blk_controller", 00:07:08.347 "vhost_scsi_controller_remove_target", 00:07:08.347 "vhost_scsi_controller_add_target", 00:07:08.347 "vhost_start_scsi_controller", 00:07:08.347 "vhost_create_scsi_controller", 00:07:08.347 "thread_set_cpumask", 00:07:08.347 "scheduler_set_options", 00:07:08.347 "framework_get_governor", 00:07:08.347 "framework_get_scheduler", 00:07:08.347 "framework_set_scheduler", 00:07:08.347 "framework_get_reactors", 00:07:08.347 "thread_get_io_channels", 00:07:08.347 "thread_get_pollers", 00:07:08.347 "thread_get_stats", 00:07:08.347 "framework_monitor_context_switch", 00:07:08.347 "spdk_kill_instance", 00:07:08.347 "log_enable_timestamps", 00:07:08.347 "log_get_flags", 00:07:08.347 "log_clear_flag", 00:07:08.347 "log_set_flag", 00:07:08.347 "log_get_level", 00:07:08.347 "log_set_level", 00:07:08.347 "log_get_print_level", 00:07:08.347 "log_set_print_level", 00:07:08.347 "framework_enable_cpumask_locks", 00:07:08.347 "framework_disable_cpumask_locks", 00:07:08.347 "framework_wait_init", 00:07:08.347 "framework_start_init", 00:07:08.347 "scsi_get_devices", 00:07:08.347 "bdev_get_histogram", 00:07:08.347 "bdev_enable_histogram", 00:07:08.347 "bdev_set_qos_limit", 00:07:08.347 "bdev_set_qd_sampling_period", 00:07:08.347 "bdev_get_bdevs", 00:07:08.347 "bdev_reset_iostat", 00:07:08.347 "bdev_get_iostat", 00:07:08.347 "bdev_examine", 00:07:08.347 "bdev_wait_for_examine", 00:07:08.347 "bdev_set_options", 00:07:08.347 "accel_get_stats", 00:07:08.347 "accel_set_options", 00:07:08.347 "accel_set_driver", 00:07:08.347 "accel_crypto_key_destroy", 00:07:08.347 "accel_crypto_keys_get", 00:07:08.347 "accel_crypto_key_create", 00:07:08.347 "accel_assign_opc", 00:07:08.347 
"accel_get_module_info", 00:07:08.347 "accel_get_opc_assignments", 00:07:08.347 "vmd_rescan", 00:07:08.347 "vmd_remove_device", 00:07:08.347 "vmd_enable", 00:07:08.347 "sock_get_default_impl", 00:07:08.347 "sock_set_default_impl", 00:07:08.347 "sock_impl_set_options", 00:07:08.347 "sock_impl_get_options", 00:07:08.347 "iobuf_get_stats", 00:07:08.347 "iobuf_set_options", 00:07:08.347 "keyring_get_keys", 00:07:08.347 "vfu_tgt_set_base_path", 00:07:08.347 "framework_get_pci_devices", 00:07:08.347 "framework_get_config", 00:07:08.347 "framework_get_subsystems", 00:07:08.347 "fsdev_set_opts", 00:07:08.347 "fsdev_get_opts", 00:07:08.347 "trace_get_info", 00:07:08.347 "trace_get_tpoint_group_mask", 00:07:08.347 "trace_disable_tpoint_group", 00:07:08.347 "trace_enable_tpoint_group", 00:07:08.347 "trace_clear_tpoint_mask", 00:07:08.347 "trace_set_tpoint_mask", 00:07:08.347 "notify_get_notifications", 00:07:08.347 "notify_get_types", 00:07:08.347 "spdk_get_version", 00:07:08.347 "rpc_get_methods" 00:07:08.347 ] 00:07:08.347 10:20:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.347 10:20:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:08.347 10:20:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70925 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70925 ']' 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70925 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.347 10:20:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70925 00:07:08.606 killing process with pid 70925 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70925' 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70925 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70925 00:07:08.606 ************************************ 00:07:08.606 END TEST spdkcli_tcp 00:07:08.606 ************************************ 00:07:08.606 00:07:08.606 real 0m1.819s 00:07:08.606 user 0m3.495s 00:07:08.606 sys 0m0.404s 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.606 10:20:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.865 10:20:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:08.865 10:20:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.865 10:20:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.865 10:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.865 ************************************ 00:07:08.865 START TEST dpdk_mem_utility 00:07:08.865 ************************************ 00:07:08.865 10:20:43 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:08.865 * Looking for test storage... 
00:07:08.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:08.865 10:20:43 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.865 10:20:43 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.865 10:20:43 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.865 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:08.865 10:20:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:08.866 10:20:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.866 10:20:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:08.866 10:20:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.866 10:20:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.866 10:20:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.866 10:20:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.866 --rc genhtml_branch_coverage=1 00:07:08.866 --rc genhtml_function_coverage=1 00:07:08.866 --rc genhtml_legend=1 00:07:08.866 --rc geninfo_all_blocks=1 00:07:08.866 --rc geninfo_unexecuted_blocks=1 00:07:08.866 00:07:08.866 ' 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.866 --rc 
genhtml_branch_coverage=1 00:07:08.866 --rc genhtml_function_coverage=1 00:07:08.866 --rc genhtml_legend=1 00:07:08.866 --rc geninfo_all_blocks=1 00:07:08.866 --rc geninfo_unexecuted_blocks=1 00:07:08.866 00:07:08.866 ' 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.866 --rc genhtml_branch_coverage=1 00:07:08.866 --rc genhtml_function_coverage=1 00:07:08.866 --rc genhtml_legend=1 00:07:08.866 --rc geninfo_all_blocks=1 00:07:08.866 --rc geninfo_unexecuted_blocks=1 00:07:08.866 00:07:08.866 ' 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.866 --rc genhtml_branch_coverage=1 00:07:08.866 --rc genhtml_function_coverage=1 00:07:08.866 --rc genhtml_legend=1 00:07:08.866 --rc geninfo_all_blocks=1 00:07:08.866 --rc geninfo_unexecuted_blocks=1 00:07:08.866 00:07:08.866 ' 00:07:08.866 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:08.866 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71019 00:07:08.866 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.866 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71019 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71019 ']' 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.866 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.125 [2024-12-10 10:20:44.133145] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:09.125 [2024-12-10 10:20:44.133681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71019 ] 00:07:09.125 [2024-12-10 10:20:44.272294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.125 [2024-12-10 10:20:44.305942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.125 [2024-12-10 10:20:44.341287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.386 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.386 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:09.386 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:09.386 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:09.386 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.386 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 { 00:07:09.386 "filename": "/tmp/spdk_mem_dump.txt" 00:07:09.386 } 00:07:09.386 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.386 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:09.386 DPDK memory size 860.000000 MiB in 1 heap(s) 00:07:09.386 1 heaps totaling size 860.000000 MiB 00:07:09.386 size: 860.000000 MiB heap id: 0 00:07:09.386 end heaps---------- 00:07:09.386 9 mempools totaling size 642.649841 MiB 00:07:09.386 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:09.386 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:09.386 size: 92.545471 MiB name: bdev_io_71019 00:07:09.386 size: 51.011292 MiB name: evtpool_71019 00:07:09.386 size: 50.003479 MiB name: msgpool_71019 00:07:09.386 size: 36.509338 MiB name: fsdev_io_71019 00:07:09.386 size: 21.763794 MiB name: PDU_Pool 00:07:09.386 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:09.386 size: 0.026123 MiB name: Session_Pool 00:07:09.386 end mempools------- 00:07:09.386 6 memzones totaling size 4.142822 MiB 00:07:09.386 size: 1.000366 MiB name: RG_ring_0_71019 00:07:09.386 size: 1.000366 MiB name: RG_ring_1_71019 00:07:09.386 size: 1.000366 MiB name: RG_ring_4_71019 00:07:09.386 size: 1.000366 MiB name: RG_ring_5_71019 00:07:09.386 size: 0.125366 MiB name: RG_ring_2_71019 00:07:09.386 size: 0.015991 MiB name: RG_ring_3_71019 00:07:09.386 end memzones------- 00:07:09.386 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:09.386 heap id: 0 total size: 860.000000 MiB number of busy elements: 305 number of free elements: 16 00:07:09.386 list of free elements. 
size: 13.936890 MiB 00:07:09.386 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:09.386 element at address: 0x200000800000 with size: 1.996948 MiB 00:07:09.386 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:07:09.386 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:09.386 element at address: 0x200034a00000 with size: 0.994446 MiB 00:07:09.386 element at address: 0x200009600000 with size: 0.959839 MiB 00:07:09.386 element at address: 0x200015e00000 with size: 0.954285 MiB 00:07:09.386 element at address: 0x20001c000000 with size: 0.936584 MiB 00:07:09.386 element at address: 0x200000200000 with size: 0.834839 MiB 00:07:09.386 element at address: 0x20001d800000 with size: 0.567505 MiB 00:07:09.386 element at address: 0x20000d800000 with size: 0.489258 MiB 00:07:09.386 element at address: 0x200003e00000 with size: 0.488647 MiB 00:07:09.386 element at address: 0x20001c200000 with size: 0.485657 MiB 00:07:09.386 element at address: 0x200007000000 with size: 0.480469 MiB 00:07:09.386 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:07:09.386 element at address: 0x200003a00000 with size: 0.353027 MiB 00:07:09.386 list of standard malloc elements. size: 199.266418 MiB 00:07:09.386 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:07:09.386 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:07:09.386 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:07:09.386 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:09.386 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:09.386 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:09.386 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:07:09.386 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:09.386 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:07:09.386 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:07:09.386 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003aff880 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:07:09.386 element at 
address: 0x200003e7d900 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:07:09.386 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b000 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b180 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b240 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b300 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b480 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b540 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b600 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:07:09.387 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87d940 
with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891480 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891540 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891600 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891780 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891840 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891900 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892080 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892140 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892200 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892380 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892440 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892500 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892680 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892740 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892800 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892980 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893040 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893100 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893280 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893340 with size: 0.000183 MiB 
00:07:09.387 element at address: 0x20001d893400 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893580 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893640 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893700 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893880 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893940 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894000 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894180 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894240 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894300 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894480 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894540 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894600 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894780 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894840 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894900 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d895080 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d895140 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d895200 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d895380 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20001d895440 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:07:09.387 element at 
address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:07:09.387 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6eb80 
with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:07:09.388 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:07:09.388 list of memzone associated elements. 
size: 646.796692 MiB 00:07:09.388 element at address: 0x20001d895500 with size: 211.416748 MiB 00:07:09.388 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:09.388 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:07:09.388 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:09.388 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:07:09.388 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71019_0 00:07:09.388 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:09.388 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71019_0 00:07:09.388 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:09.388 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71019_0 00:07:09.388 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:07:09.388 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71019_0 00:07:09.388 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:07:09.388 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:09.388 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:07:09.388 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:09.388 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:09.388 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71019 00:07:09.388 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:09.388 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71019 00:07:09.388 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:09.388 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71019 00:07:09.388 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:07:09.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:09.388 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:07:09.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:09.388 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:07:09.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:09.388 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:07:09.388 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:09.388 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:09.388 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71019 00:07:09.388 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:09.388 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71019 00:07:09.388 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:07:09.388 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71019 00:07:09.388 element at address: 0x200034afe940 with size: 1.000488 MiB 00:07:09.388 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71019 00:07:09.388 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:07:09.388 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71019 00:07:09.388 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:07:09.388 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71019 00:07:09.388 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:07:09.388 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:09.388 element at address: 0x20000707b780 with size: 0.500488 MiB 00:07:09.388 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:07:09.388 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:07:09.388 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:09.388 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:07:09.388 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71019 00:07:09.388 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:07:09.388 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:09.388 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:07:09.388 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:09.388 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:07:09.388 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71019 00:07:09.388 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:07:09.388 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:09.388 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:09.388 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71019 00:07:09.388 element at address: 0x200003aff940 with size: 0.000305 MiB 00:07:09.388 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71019 00:07:09.388 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:07:09.388 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71019 00:07:09.388 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:07:09.388 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:09.388 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:09.388 10:20:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71019 00:07:09.388 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71019 ']' 00:07:09.388 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71019 00:07:09.388 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:09.388 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.388 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71019 00:07:09.647 killing process with pid 71019 00:07:09.647 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.647 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.647 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71019' 00:07:09.647 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71019 00:07:09.647 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71019 00:07:09.906 00:07:09.906 real 0m1.016s 00:07:09.906 user 0m1.081s 00:07:09.906 sys 0m0.301s 00:07:09.906 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.906 ************************************ 00:07:09.906 END TEST dpdk_mem_utility 00:07:09.906 ************************************ 00:07:09.906 10:20:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.906 10:20:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:09.906 10:20:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.906 10:20:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.906 10:20:44 -- common/autotest_common.sh@10 -- # set +x 
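For reference, the dpdk_mem_utility run above is a simple dump-then-parse flow. A minimal sketch with the script paths and flags taken from the log; calling rpc.py directly instead of the test's rpc_cmd helper is an assumption for illustration:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &                      # start the target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0             # per-element detail for heap 0, as dumped above
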
00:07:09.906 ************************************ 00:07:09.906 START TEST event 00:07:09.906 ************************************ 00:07:09.906 10:20:44 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:09.906 * Looking for test storage... 00:07:09.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:09.906 10:20:45 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:09.906 10:20:45 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:09.906 10:20:45 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:09.906 10:20:45 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:09.906 10:20:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.906 10:20:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.906 10:20:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.906 10:20:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.906 10:20:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.906 10:20:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.906 10:20:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.906 10:20:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.906 10:20:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.906 10:20:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.906 10:20:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.906 10:20:45 event -- scripts/common.sh@344 -- # case "$op" in 00:07:09.906 10:20:45 event -- scripts/common.sh@345 -- # : 1 00:07:09.906 10:20:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.906 10:20:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.906 10:20:45 event -- scripts/common.sh@365 -- # decimal 1 00:07:09.906 10:20:45 event -- scripts/common.sh@353 -- # local d=1 00:07:09.906 10:20:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.906 10:20:45 event -- scripts/common.sh@355 -- # echo 1 00:07:09.906 10:20:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.164 10:20:45 event -- scripts/common.sh@366 -- # decimal 2 00:07:10.164 10:20:45 event -- scripts/common.sh@353 -- # local d=2 00:07:10.164 10:20:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.164 10:20:45 event -- scripts/common.sh@355 -- # echo 2 00:07:10.164 10:20:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.164 10:20:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.164 10:20:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.164 10:20:45 event -- scripts/common.sh@368 -- # return 0 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.164 --rc genhtml_branch_coverage=1 00:07:10.164 --rc genhtml_function_coverage=1 00:07:10.164 --rc genhtml_legend=1 00:07:10.164 --rc geninfo_all_blocks=1 00:07:10.164 --rc geninfo_unexecuted_blocks=1 00:07:10.164 00:07:10.164 ' 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.164 --rc genhtml_branch_coverage=1 00:07:10.164 --rc genhtml_function_coverage=1 00:07:10.164 --rc genhtml_legend=1 00:07:10.164 --rc 
geninfo_all_blocks=1 00:07:10.164 --rc geninfo_unexecuted_blocks=1 00:07:10.164 00:07:10.164 ' 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.164 --rc genhtml_branch_coverage=1 00:07:10.164 --rc genhtml_function_coverage=1 00:07:10.164 --rc genhtml_legend=1 00:07:10.164 --rc geninfo_all_blocks=1 00:07:10.164 --rc geninfo_unexecuted_blocks=1 00:07:10.164 00:07:10.164 ' 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.164 --rc genhtml_branch_coverage=1 00:07:10.164 --rc genhtml_function_coverage=1 00:07:10.164 --rc genhtml_legend=1 00:07:10.164 --rc geninfo_all_blocks=1 00:07:10.164 --rc geninfo_unexecuted_blocks=1 00:07:10.164 00:07:10.164 ' 00:07:10.164 10:20:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:10.164 10:20:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:10.164 10:20:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:10.164 10:20:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.164 10:20:45 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.164 ************************************ 00:07:10.164 START TEST event_perf 00:07:10.164 ************************************ 00:07:10.164 10:20:45 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:10.164 Running I/O for 1 seconds...[2024-12-10 10:20:45.169031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:10.164 [2024-12-10 10:20:45.169288] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71091 ] 00:07:10.164 [2024-12-10 10:20:45.304771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.165 [2024-12-10 10:20:45.339421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.165 [2024-12-10 10:20:45.339520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.165 [2024-12-10 10:20:45.339647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.165 Running I/O for 1 seconds...[2024-12-10 10:20:45.339647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.540 00:07:11.540 lcore 0: 201877 00:07:11.540 lcore 1: 201876 00:07:11.540 lcore 2: 201876 00:07:11.540 lcore 3: 201876 00:07:11.540 done. 
00:07:11.540 ************************************ 00:07:11.540 END TEST event_perf 00:07:11.540 ************************************ 00:07:11.540 00:07:11.540 real 0m1.239s 00:07:11.540 user 0m4.077s 00:07:11.540 sys 0m0.043s 00:07:11.540 10:20:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.540 10:20:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.540 10:20:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:11.540 10:20:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:11.540 10:20:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.540 10:20:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.540 ************************************ 00:07:11.540 START TEST event_reactor 00:07:11.540 ************************************ 00:07:11.540 10:20:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:11.540 [2024-12-10 10:20:46.461935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.540 [2024-12-10 10:20:46.462027] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71129 ] 00:07:11.540 [2024-12-10 10:20:46.598517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.540 [2024-12-10 10:20:46.630676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.476 test_start 00:07:12.476 oneshot 00:07:12.476 tick 100 00:07:12.476 tick 100 00:07:12.476 tick 250 00:07:12.476 tick 100 00:07:12.476 tick 100 00:07:12.476 tick 100 00:07:12.476 tick 250 00:07:12.476 tick 500 00:07:12.476 tick 100 00:07:12.476 tick 100 00:07:12.476 tick 250 00:07:12.476 tick 100 00:07:12.476 tick 100 00:07:12.476 test_end 00:07:12.476 00:07:12.476 real 0m1.237s 00:07:12.476 user 0m1.095s 00:07:12.476 sys 0m0.037s 00:07:12.476 10:20:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.476 ************************************ 00:07:12.476 END TEST event_reactor 00:07:12.476 ************************************ 00:07:12.476 10:20:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:12.735 10:20:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:12.735 10:20:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:12.735 10:20:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.735 10:20:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.735 ************************************ 00:07:12.735 START TEST event_reactor_perf 00:07:12.735 ************************************ 00:07:12.735 10:20:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:12.735 [2024-12-10 10:20:47.748309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:12.735 [2024-12-10 10:20:47.748420] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71165 ] 00:07:12.735 [2024-12-10 10:20:47.884742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.735 [2024-12-10 10:20:47.920159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.111 test_start 00:07:14.111 test_end 00:07:14.111 Performance: 451170 events per second 00:07:14.111 ************************************ 00:07:14.111 END TEST event_reactor_perf 00:07:14.111 ************************************ 00:07:14.111 00:07:14.111 real 0m1.238s 00:07:14.111 user 0m1.097s 00:07:14.111 sys 0m0.036s 00:07:14.111 10:20:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.111 10:20:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.111 10:20:49 event -- event/event.sh@49 -- # uname -s 00:07:14.111 10:20:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:14.111 10:20:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:14.111 10:20:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.111 10:20:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.111 10:20:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.111 ************************************ 00:07:14.111 START TEST event_scheduler 00:07:14.111 ************************************ 00:07:14.111 10:20:49 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:14.111 * Looking for test storage... 
00:07:14.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:14.111 10:20:49 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:14.111 10:20:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:14.111 10:20:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:14.111 10:20:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:14.111 10:20:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.111 10:20:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.111 10:20:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.112 10:20:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.112 --rc genhtml_branch_coverage=1 00:07:14.112 --rc genhtml_function_coverage=1 00:07:14.112 --rc genhtml_legend=1 00:07:14.112 --rc geninfo_all_blocks=1 00:07:14.112 --rc geninfo_unexecuted_blocks=1 00:07:14.112 00:07:14.112 ' 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.112 --rc genhtml_branch_coverage=1 00:07:14.112 --rc genhtml_function_coverage=1 00:07:14.112 --rc genhtml_legend=1 00:07:14.112 --rc geninfo_all_blocks=1 00:07:14.112 --rc geninfo_unexecuted_blocks=1 00:07:14.112 00:07:14.112 ' 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.112 --rc genhtml_branch_coverage=1 00:07:14.112 --rc genhtml_function_coverage=1 00:07:14.112 --rc genhtml_legend=1 00:07:14.112 --rc geninfo_all_blocks=1 00:07:14.112 --rc geninfo_unexecuted_blocks=1 00:07:14.112 00:07:14.112 ' 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.112 --rc genhtml_branch_coverage=1 00:07:14.112 --rc genhtml_function_coverage=1 00:07:14.112 --rc genhtml_legend=1 00:07:14.112 --rc geninfo_all_blocks=1 00:07:14.112 --rc geninfo_unexecuted_blocks=1 00:07:14.112 00:07:14.112 ' 00:07:14.112 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:14.112 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71229 00:07:14.112 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:14.112 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71229 00:07:14.112 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71229 ']' 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.112 10:20:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.112 [2024-12-10 10:20:49.271677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:14.112 [2024-12-10 10:20:49.271984] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:07:14.371 [2024-12-10 10:20:49.404341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.371 [2024-12-10 10:20:49.448033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.371 [2024-12-10 10:20:49.449434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.371 [2024-12-10 10:20:49.449620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.371 [2024-12-10 10:20:49.449638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:14.371 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.371 POWER: Cannot set governor of lcore 0 to userspace 00:07:14.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.371 POWER: Cannot set governor of lcore 0 to performance 00:07:14.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.371 POWER: Cannot set governor of lcore 0 to userspace 00:07:14.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.371 POWER: Cannot set governor of lcore 0 to userspace 00:07:14.371 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:14.371 POWER: Unable to set Power Management Environment for lcore 0 00:07:14.371 [2024-12-10 10:20:49.559719] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:14.371 [2024-12-10 10:20:49.559732] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:14.371 [2024-12-10 10:20:49.559746] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:14.371 [2024-12-10 10:20:49.559773] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:14.371 [2024-12-10 10:20:49.559781] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:14.371 [2024-12-10 10:20:49.559788] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.371 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.371 10:20:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.371 [2024-12-10 10:20:49.592713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.630 [2024-12-10 10:20:49.607866] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
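The scheduler test launches its app with --wait-for-rpc, so the framework pauses until it is configured over the /var/tmp/spdk.sock RPC socket: the script selects the dynamic scheduler, then calls framework_start_init. The POWER/GUEST_CHANNEL errors above only mean the DPDK governor cannot drive CPU frequencies inside this VM; the dynamic scheduler still comes up without it. A rough sketch of the same sequence, assuming rpc.py from the same checkout and its default socket, is:

  # Start the scheduler test app paused, reserving cores 0-3 with main lcore 2
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # Switch to the dynamic scheduler, then let framework initialization continue
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init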
00:07:14.630 10:20:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.630 10:20:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:14.630 10:20:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.630 10:20:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.630 10:20:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.630 ************************************ 00:07:14.630 START TEST scheduler_create_thread 00:07:14.630 ************************************ 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.630 2 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.630 3 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.630 4 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.630 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 5 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 6 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 7 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 8 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 9 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 10 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.631 10:20:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.566 10:20:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.566 10:20:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:15.566 10:20:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.566 10:20:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.955 10:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.955 10:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:16.955 10:20:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:16.955 10:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.955 10:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.908 ************************************ 00:07:17.908 END TEST scheduler_create_thread 00:07:17.908 ************************************ 00:07:17.908 10:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.908 00:07:17.908 real 0m3.374s 00:07:17.908 user 0m0.018s 00:07:17.908 sys 0m0.007s 00:07:17.908 10:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.908 10:20:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.908 10:20:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:17.908 10:20:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71229 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71229 ']' 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71229 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71229 00:07:17.908 killing process with pid 71229 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71229' 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71229 00:07:17.908 10:20:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71229 00:07:18.166 [2024-12-10 10:20:53.375568] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
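scheduler_create_thread then exercises the running app through the test's scheduler_plugin RPCs: it creates pinned active and idle threads with explicit core masks and activity percentages, sets one thread to 50% activity, and deletes another before the script kills pid 71229. A condensed sketch of that RPC sequence, assuming the plugin shipped with the scheduler test is on rpc.py's plugin path and that thread ids 11 and 12 are the ones returned by the create calls (as in the log), is:

  # Create an active thread pinned to core 0 at 100% activity; the RPC returns its thread id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Set a previously created thread to 50% activity, then delete another one
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12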
00:07:18.425 ************************************ 00:07:18.425 END TEST event_scheduler 00:07:18.425 ************************************ 00:07:18.425 00:07:18.425 real 0m4.527s 00:07:18.425 user 0m7.953s 00:07:18.425 sys 0m0.279s 00:07:18.425 10:20:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.425 10:20:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.425 10:20:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:18.425 10:20:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:18.425 10:20:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.425 10:20:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.425 10:20:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.425 ************************************ 00:07:18.425 START TEST app_repeat 00:07:18.425 ************************************ 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71326 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.425 Process app_repeat pid: 71326 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71326' 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:18.425 spdk_app_start Round 0 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:18.425 10:20:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71326 /var/tmp/spdk-nbd.sock 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71326 ']' 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.425 10:20:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.425 [2024-12-10 10:20:53.641643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:18.425 [2024-12-10 10:20:53.641729] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71326 ] 00:07:18.684 [2024-12-10 10:20:53.777876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.684 [2024-12-10 10:20:53.815613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.684 [2024-12-10 10:20:53.815645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.684 [2024-12-10 10:20:53.846036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.684 10:20:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.684 10:20:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:18.684 10:20:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.942 Malloc0 00:07:18.942 10:20:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.510 Malloc1 00:07:19.510 10:20:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.510 /dev/nbd0 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.510 10:20:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.510 10:20:54 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.510 1+0 records in 00:07:19.510 1+0 records out 00:07:19.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363207 s, 11.3 MB/s 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:19.510 10:20:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.769 10:20:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.769 10:20:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:19.769 10:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.769 10:20:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.769 10:20:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:20.032 /dev/nbd1 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.032 1+0 records in 00:07:20.032 1+0 records out 00:07:20.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354979 s, 11.5 MB/s 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.032 10:20:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.032 10:20:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.291 { 00:07:20.291 "nbd_device": "/dev/nbd0", 00:07:20.291 "bdev_name": "Malloc0" 00:07:20.291 }, 00:07:20.291 { 00:07:20.291 "nbd_device": "/dev/nbd1", 00:07:20.291 "bdev_name": "Malloc1" 00:07:20.291 } 00:07:20.291 ]' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.291 { 00:07:20.291 "nbd_device": "/dev/nbd0", 00:07:20.291 "bdev_name": "Malloc0" 00:07:20.291 }, 00:07:20.291 { 00:07:20.291 "nbd_device": "/dev/nbd1", 00:07:20.291 "bdev_name": "Malloc1" 00:07:20.291 } 00:07:20.291 ]' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.291 /dev/nbd1' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.291 /dev/nbd1' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.291 256+0 records in 00:07:20.291 256+0 records out 00:07:20.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107842 s, 97.2 MB/s 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.291 256+0 records in 00:07:20.291 256+0 records out 00:07:20.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215213 s, 48.7 MB/s 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.291 256+0 records in 00:07:20.291 256+0 records out 00:07:20.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258325 s, 40.6 MB/s 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.291 10:20:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.859 10:20:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.117 10:20:56 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.117 10:20:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.376 10:20:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.376 10:20:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.636 10:20:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.636 [2024-12-10 10:20:56.842837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.895 [2024-12-10 10:20:56.879535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.895 [2024-12-10 10:20:56.879547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.895 [2024-12-10 10:20:56.912532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.895 [2024-12-10 10:20:56.912621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.895 [2024-12-10 10:20:56.912634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:25.182 spdk_app_start Round 1 00:07:25.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.182 10:20:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:25.182 10:20:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:25.182 10:20:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71326 /var/tmp/spdk-nbd.sock 00:07:25.182 10:20:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71326 ']' 00:07:25.182 10:20:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.182 10:20:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.182 10:20:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:25.183 10:20:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.183 10:20:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.183 10:21:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.183 10:21:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:25.183 10:21:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.183 Malloc0 00:07:25.183 10:21:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.442 Malloc1 00:07:25.442 10:21:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.442 10:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.701 /dev/nbd0 00:07:25.701 10:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.701 10:21:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.701 1+0 records in 00:07:25.701 1+0 records out 
00:07:25.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431745 s, 9.5 MB/s 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.701 10:21:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:25.701 10:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.701 10:21:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.701 10:21:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.959 /dev/nbd1 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.959 1+0 records in 00:07:25.959 1+0 records out 00:07:25.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301114 s, 13.6 MB/s 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.959 10:21:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.959 10:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.527 { 00:07:26.527 "nbd_device": "/dev/nbd0", 00:07:26.527 "bdev_name": "Malloc0" 00:07:26.527 }, 00:07:26.527 { 00:07:26.527 "nbd_device": "/dev/nbd1", 00:07:26.527 "bdev_name": "Malloc1" 00:07:26.527 } 
00:07:26.527 ]' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.527 { 00:07:26.527 "nbd_device": "/dev/nbd0", 00:07:26.527 "bdev_name": "Malloc0" 00:07:26.527 }, 00:07:26.527 { 00:07:26.527 "nbd_device": "/dev/nbd1", 00:07:26.527 "bdev_name": "Malloc1" 00:07:26.527 } 00:07:26.527 ]' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.527 /dev/nbd1' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.527 /dev/nbd1' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:26.527 256+0 records in 00:07:26.527 256+0 records out 00:07:26.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00973487 s, 108 MB/s 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.527 256+0 records in 00:07:26.527 256+0 records out 00:07:26.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234337 s, 44.7 MB/s 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.527 256+0 records in 00:07:26.527 256+0 records out 00:07:26.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233718 s, 44.9 MB/s 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.527 10:21:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.527 10:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.786 10:21:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.045 10:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.304 10:21:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.304 10:21:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.872 10:21:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.872 [2024-12-10 10:21:02.923722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.872 [2024-12-10 10:21:02.959122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.872 [2024-12-10 10:21:02.959134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.872 [2024-12-10 10:21:02.990867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.872 [2024-12-10 10:21:02.990979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.872 [2024-12-10 10:21:02.990992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.160 spdk_app_start Round 2 00:07:31.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.160 10:21:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.160 10:21:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:31.160 10:21:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71326 /var/tmp/spdk-nbd.sock 00:07:31.160 10:21:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71326 ']' 00:07:31.160 10:21:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.160 10:21:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.160 10:21:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
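Each app_repeat round above repeats the same data-path check against the app's /var/tmp/spdk-nbd.sock socket: create two 64 MiB malloc bdevs (4096-byte blocks), export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each NBD device with dd, and read it back with cmp before stopping the disks and killing the instance. A rough sketch of one device's round trip, using the same commands the harness runs and assuming it is executed from the SPDK repo root, is:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # creates Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # expose the bdev as an NBD device
  dd if=/dev/urandom of=test/event/nbdrandtest bs=4096 count=256
  dd if=test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M test/event/nbdrandtest /dev/nbd0                               # read back and verify
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0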
00:07:31.160 10:21:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.160 10:21:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.160 10:21:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.160 10:21:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:31.160 10:21:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.160 Malloc0 00:07:31.160 10:21:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.420 Malloc1 00:07:31.420 10:21:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.420 10:21:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:31.988 /dev/nbd0 00:07:31.988 10:21:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:31.988 10:21:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.988 1+0 records in 00:07:31.988 1+0 records out 
00:07:31.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184128 s, 22.2 MB/s 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:31.988 10:21:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:31.988 10:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.988 10:21:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.988 10:21:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:31.988 /dev/nbd1 00:07:32.247 10:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.247 10:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.247 1+0 records in 00:07:32.247 1+0 records out 00:07:32.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303996 s, 13.5 MB/s 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.247 10:21:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:32.247 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.247 10:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.247 10:21:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.248 10:21:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.248 10:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.507 { 00:07:32.507 "nbd_device": "/dev/nbd0", 00:07:32.507 "bdev_name": "Malloc0" 00:07:32.507 }, 00:07:32.507 { 00:07:32.507 "nbd_device": "/dev/nbd1", 00:07:32.507 "bdev_name": "Malloc1" 00:07:32.507 } 
00:07:32.507 ]' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.507 { 00:07:32.507 "nbd_device": "/dev/nbd0", 00:07:32.507 "bdev_name": "Malloc0" 00:07:32.507 }, 00:07:32.507 { 00:07:32.507 "nbd_device": "/dev/nbd1", 00:07:32.507 "bdev_name": "Malloc1" 00:07:32.507 } 00:07:32.507 ]' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.507 /dev/nbd1' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.507 /dev/nbd1' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.507 256+0 records in 00:07:32.507 256+0 records out 00:07:32.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075564 s, 139 MB/s 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.507 256+0 records in 00:07:32.507 256+0 records out 00:07:32.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236612 s, 44.3 MB/s 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.507 256+0 records in 00:07:32.507 256+0 records out 00:07:32.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320926 s, 32.7 MB/s 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.507 10:21:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.767 10:21:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.026 10:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.285 10:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.285 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.285 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
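Note on the verify pass traced above: stripped of the harness plumbing, it is a plain shell round trip through the two NBD devices. A minimal sketch, assuming /dev/nbd0 and /dev/nbd1 are already exported by the target on /var/tmp/spdk-nbd.sock (the temp-file path below is illustrative; the harness uses test/event/nbdrandtest):
    tmp_file=/tmp/nbdrandtest                                    # illustrative path, not the harness's
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern through each NBD device
        cmp -b -n 1M "$tmp_file" "$nbd"                              # read it back and compare byte for byte
    done
    rm "$tmp_file"
Any mismatch makes cmp exit non-zero, which is what fails the test.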
.nbd_device' 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.544 10:21:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.544 10:21:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.803 10:21:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.803 [2024-12-10 10:21:08.913572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.803 [2024-12-10 10:21:08.948442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.803 [2024-12-10 10:21:08.948467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.803 [2024-12-10 10:21:08.978666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.803 [2024-12-10 10:21:08.978760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.803 [2024-12-10 10:21:08.978774] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.092 10:21:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71326 /var/tmp/spdk-nbd.sock 00:07:37.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.092 10:21:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71326 ']' 00:07:37.092 10:21:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.092 10:21:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.092 10:21:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
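Before Round 3 continues, a note on the nbd_get_count check that just ran: it asks the target for its exported disks over RPC and counts the /dev/nbd entries. A hedged sketch of that check, using the rpc.py path and socket shown in the trace:
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
              | jq -r '.[] | .nbd_device' \
              | grep -c /dev/nbd || true)   # grep -c prints 0 but exits 1 on no match, hence the "true" in the trace
The count is expected to be 2 while the disks are attached and 0 after nbd_stop_disk has run for both devices.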
00:07:37.092 10:21:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.092 10:21:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:37.092 10:21:12 event.app_repeat -- event/event.sh@39 -- # killprocess 71326 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71326 ']' 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71326 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71326 00:07:37.092 killing process with pid 71326 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71326' 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71326 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71326 00:07:37.092 spdk_app_start is called in Round 0. 00:07:37.092 Shutdown signal received, stop current app iteration 00:07:37.092 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:37.092 spdk_app_start is called in Round 1. 00:07:37.092 Shutdown signal received, stop current app iteration 00:07:37.092 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:37.092 spdk_app_start is called in Round 2. 00:07:37.092 Shutdown signal received, stop current app iteration 00:07:37.092 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:37.092 spdk_app_start is called in Round 3. 00:07:37.092 Shutdown signal received, stop current app iteration 00:07:37.092 10:21:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:37.092 10:21:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:37.092 00:07:37.092 real 0m18.654s 00:07:37.092 user 0m42.774s 00:07:37.092 sys 0m2.623s 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.092 ************************************ 00:07:37.092 END TEST app_repeat 00:07:37.092 ************************************ 00:07:37.092 10:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.092 10:21:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:37.092 10:21:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.092 10:21:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.092 10:21:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.092 10:21:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.092 ************************************ 00:07:37.092 START TEST cpu_locks 00:07:37.092 ************************************ 00:07:37.092 10:21:12 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.351 * Looking for test storage... 
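Looking back at the killprocess calls in the app_repeat teardown above, they all follow one helper shape. A simplified sketch inferred from the trace (not a verbatim copy of autotest_common.sh; the OS and sudo special cases seen in the log are omitted):
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app started by this shell
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap the child and surface its exit code
    }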
00:07:37.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:37.351 10:21:12 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:37.351 10:21:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:37.351 10:21:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:37.351 10:21:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.351 10:21:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:37.351 10:21:12 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.351 10:21:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.351 --rc genhtml_branch_coverage=1 00:07:37.352 --rc genhtml_function_coverage=1 00:07:37.352 --rc genhtml_legend=1 00:07:37.352 --rc geninfo_all_blocks=1 00:07:37.352 --rc geninfo_unexecuted_blocks=1 00:07:37.352 00:07:37.352 ' 00:07:37.352 10:21:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:37.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.352 --rc genhtml_branch_coverage=1 00:07:37.352 --rc genhtml_function_coverage=1 
00:07:37.352 --rc genhtml_legend=1 00:07:37.352 --rc geninfo_all_blocks=1 00:07:37.352 --rc geninfo_unexecuted_blocks=1 00:07:37.352 00:07:37.352 ' 00:07:37.352 10:21:12 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:37.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.352 --rc genhtml_branch_coverage=1 00:07:37.352 --rc genhtml_function_coverage=1 00:07:37.352 --rc genhtml_legend=1 00:07:37.352 --rc geninfo_all_blocks=1 00:07:37.352 --rc geninfo_unexecuted_blocks=1 00:07:37.352 00:07:37.352 ' 00:07:37.352 10:21:12 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:37.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.352 --rc genhtml_branch_coverage=1 00:07:37.352 --rc genhtml_function_coverage=1 00:07:37.352 --rc genhtml_legend=1 00:07:37.352 --rc geninfo_all_blocks=1 00:07:37.352 --rc geninfo_unexecuted_blocks=1 00:07:37.352 00:07:37.352 ' 00:07:37.352 10:21:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:37.352 10:21:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:37.352 10:21:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:37.352 10:21:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:37.352 10:21:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.352 10:21:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.352 10:21:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.352 ************************************ 00:07:37.352 START TEST default_locks 00:07:37.352 ************************************ 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71769 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71769 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71769 ']' 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.352 10:21:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.611 [2024-12-10 10:21:12.578767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:37.611 [2024-12-10 10:21:12.578867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71769 ] 00:07:37.611 [2024-12-10 10:21:12.716310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.611 [2024-12-10 10:21:12.759843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.611 [2024-12-10 10:21:12.803182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.586 10:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.586 10:21:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:38.586 10:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71769 00:07:38.586 10:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71769 00:07:38.586 10:21:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71769 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71769 ']' 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71769 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71769 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.845 killing process with pid 71769 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71769' 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71769 00:07:38.845 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71769 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71769 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71769 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71769 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71769 ']' 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.104 
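The locks_exist check traced above is just a lock-file lookup. A minimal sketch, assuming util-linux lslocks is available:
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # a target that owns its cores holds files whose names contain spdk_cpu_lock
    }
A target launched with --disable-cpumask-locks holds no such files, which is what the later tests rely on.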
10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.104 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71769) - No such process 00:07:39.104 ERROR: process (pid: 71769) is no longer running 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.104 00:07:39.104 real 0m1.779s 00:07:39.104 user 0m2.034s 00:07:39.104 sys 0m0.494s 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.104 10:21:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.104 ************************************ 00:07:39.104 END TEST default_locks 00:07:39.104 ************************************ 00:07:39.363 10:21:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:39.363 10:21:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.363 10:21:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.363 10:21:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.363 ************************************ 00:07:39.363 START TEST default_locks_via_rpc 00:07:39.363 ************************************ 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:39.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
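A few entries back the suite asserts the opposite outcome: waitforlisten against the already-killed pid 71769 must fail. The NOT wrapper that turns an expected failure into a pass can be sketched as follows (inferred from the trace, simplified; the real helper also special-cases exit codes above 128, i.e. deaths by signal):
    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and capture its exit status
        (( !es == 0 ))   # succeed only if the wrapped command failed
    }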
00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71823 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71823 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71823 ']' 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.363 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.363 [2024-12-10 10:21:14.421967] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:39.363 [2024-12-10 10:21:14.422070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71823 ] 00:07:39.363 [2024-12-10 10:21:14.561445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.622 [2024-12-10 10:21:14.595673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.622 [2024-12-10 10:21:14.631496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 71823 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.622 10:21:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71823 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71823 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71823 ']' 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71823 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71823 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.189 killing process with pid 71823 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71823' 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71823 00:07:40.189 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71823 00:07:40.449 00:07:40.449 real 0m1.178s 00:07:40.449 user 0m1.248s 00:07:40.449 sys 0m0.474s 00:07:40.449 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.449 10:21:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 ************************************ 00:07:40.449 END TEST default_locks_via_rpc 00:07:40.449 ************************************ 00:07:40.449 10:21:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:40.449 10:21:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.449 10:21:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.449 10:21:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 ************************************ 00:07:40.449 START TEST non_locking_app_on_locked_coremask 00:07:40.449 ************************************ 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71862 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71862 /var/tmp/spdk.sock 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71862 ']' 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:40.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.449 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 [2024-12-10 10:21:15.664843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.449 [2024-12-10 10:21:15.664977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71862 ] 00:07:40.708 [2024-12-10 10:21:15.806302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.708 [2024-12-10 10:21:15.841883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.708 [2024-12-10 10:21:15.878183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71865 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71865 /var/tmp/spdk2.sock 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71865 ']' 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.967 10:21:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.967 [2024-12-10 10:21:16.070809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.967 [2024-12-10 10:21:16.070936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71865 ] 00:07:41.226 [2024-12-10 10:21:16.210348] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
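The two targets in this test only coexist because the second one opts out of core locking. Reduced to the launch commands shown in the trace (core mask and RPC sockets as in the log):
    # first target claims core 0 and its lock file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second target shares core 0 but skips the lock files, so both can run
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
Without --disable-cpumask-locks the second launch would hit the claim_cpu_cores error exercised later in the locked-coremask test.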
00:07:41.226 [2024-12-10 10:21:16.213468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.226 [2024-12-10 10:21:16.287219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.226 [2024-12-10 10:21:16.363863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.794 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.794 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.794 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71862 00:07:41.794 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71862 00:07:41.794 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.731 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71862 00:07:42.731 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71862 ']' 00:07:42.731 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71862 00:07:42.731 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:42.731 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.731 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71862 00:07:42.990 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.990 killing process with pid 71862 00:07:42.990 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.990 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71862' 00:07:42.990 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71862 00:07:42.990 10:21:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71862 00:07:43.249 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71865 00:07:43.249 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71865 ']' 00:07:43.249 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71865 00:07:43.249 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.249 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.249 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71865 00:07:43.508 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.508 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.508 killing process with pid 71865 00:07:43.508 10:21:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71865' 00:07:43.508 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71865 00:07:43.508 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71865 00:07:43.767 00:07:43.767 real 0m3.153s 00:07:43.767 user 0m3.626s 00:07:43.767 sys 0m0.975s 00:07:43.767 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.767 10:21:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.767 ************************************ 00:07:43.767 END TEST non_locking_app_on_locked_coremask 00:07:43.767 ************************************ 00:07:43.767 10:21:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:43.767 10:21:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.767 10:21:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.767 10:21:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.767 ************************************ 00:07:43.767 START TEST locking_app_on_unlocked_coremask 00:07:43.767 ************************************ 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71932 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71932 /var/tmp/spdk.sock 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71932 ']' 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.767 10:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.767 [2024-12-10 10:21:18.862026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:43.767 [2024-12-10 10:21:18.862149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71932 ] 00:07:43.767 [2024-12-10 10:21:18.992072] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:43.767 [2024-12-10 10:21:18.992150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.026 [2024-12-10 10:21:19.025906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.027 [2024-12-10 10:21:19.060069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71935 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71935 /var/tmp/spdk2.sock 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71935 ']' 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.027 10:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.027 [2024-12-10 10:21:19.246586] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:44.027 [2024-12-10 10:21:19.246699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:07:44.286 [2024-12-10 10:21:19.386935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.286 [2024-12-10 10:21:19.465007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.545 [2024-12-10 10:21:19.536967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.113 10:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.113 10:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.113 10:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71935 00:07:45.113 10:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71935 00:07:45.113 10:21:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71932 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71932 ']' 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71932 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71932 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.049 killing process with pid 71932 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71932' 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71932 00:07:46.049 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71932 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71935 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71935 ']' 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71935 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71935 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.618 killing process with pid 71935 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71935' 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71935 00:07:46.618 10:21:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71935 00:07:46.877 00:07:46.877 real 0m3.214s 00:07:46.877 user 0m3.794s 00:07:46.877 sys 0m0.930s 00:07:46.877 10:21:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.877 10:21:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.877 ************************************ 00:07:46.877 END TEST locking_app_on_unlocked_coremask 00:07:46.877 ************************************ 00:07:46.877 10:21:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:46.877 10:21:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.878 10:21:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.878 10:21:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.878 ************************************ 00:07:46.878 START TEST locking_app_on_locked_coremask 00:07:46.878 ************************************ 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72002 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72002 /var/tmp/spdk.sock 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72002 ']' 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.878 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.137 [2024-12-10 10:21:22.130186] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:47.137 [2024-12-10 10:21:22.130261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72002 ] 00:07:47.137 [2024-12-10 10:21:22.260044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.137 [2024-12-10 10:21:22.297464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.137 [2024-12-10 10:21:22.334137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72005 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72005 /var/tmp/spdk2.sock 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72005 /var/tmp/spdk2.sock 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72005 /var/tmp/spdk2.sock 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72005 ']' 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.396 10:21:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.396 [2024-12-10 10:21:22.530058] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:47.396 [2024-12-10 10:21:22.530359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72005 ] 00:07:47.657 [2024-12-10 10:21:22.673028] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72002 has claimed it. 00:07:47.657 [2024-12-10 10:21:22.673204] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:48.225 ERROR: process (pid: 72005) is no longer running 00:07:48.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72005) - No such process 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72002 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72002 00:07:48.225 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72002 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72002 ']' 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72002 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72002 00:07:48.793 killing process with pid 72002 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72002' 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72002 00:07:48.793 10:21:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72002 00:07:48.793 ************************************ 00:07:48.793 END TEST locking_app_on_locked_coremask 00:07:48.793 ************************************ 00:07:48.793 00:07:48.793 real 0m1.943s 00:07:48.793 user 0m2.352s 00:07:48.793 sys 0m0.539s 00:07:48.793 10:21:24 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.793 10:21:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.053 10:21:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:49.053 10:21:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.053 10:21:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.053 10:21:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.053 ************************************ 00:07:49.053 START TEST locking_overlapped_coremask 00:07:49.053 ************************************ 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72056 00:07:49.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72056 /var/tmp/spdk.sock 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72056 ']' 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.053 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.053 [2024-12-10 10:21:24.137095] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:49.053 [2024-12-10 10:21:24.137194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72056 ] 00:07:49.312 [2024-12-10 10:21:24.279003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.312 [2024-12-10 10:21:24.315520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.312 [2024-12-10 10:21:24.315654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.312 [2024-12-10 10:21:24.315914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.312 [2024-12-10 10:21:24.351149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72061 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72061 /var/tmp/spdk2.sock 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72061 /var/tmp/spdk2.sock 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:49.312 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72061 /var/tmp/spdk2.sock 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72061 ']' 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.313 10:21:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.313 [2024-12-10 10:21:24.534959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
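The failure that follows comes from the two core masks overlapping: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so both want core 2. A minimal sketch of the overlap arithmetic:

    # Sketch: cores 0-2 (0x7) and cores 2-4 (0x1c) share exactly one core
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2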
00:07:49.313 [2024-12-10 10:21:24.535239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72061 ] 00:07:49.572 [2024-12-10 10:21:24.681682] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72056 has claimed it. 00:07:49.572 [2024-12-10 10:21:24.681785] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:50.140 ERROR: process (pid: 72061) is no longer running 00:07:50.140 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72061) - No such process 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:50.140 10:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72056 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72056 ']' 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72056 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72056 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72056' 00:07:50.141 killing process with pid 72056 00:07:50.141 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72056 00:07:50.141 10:21:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72056 00:07:50.400 00:07:50.400 real 0m1.462s 00:07:50.400 user 0m4.023s 00:07:50.400 sys 0m0.310s 00:07:50.400 ************************************ 00:07:50.400 END TEST locking_overlapped_coremask 00:07:50.400 ************************************ 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.400 10:21:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:50.400 10:21:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.400 10:21:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.400 10:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.400 ************************************ 00:07:50.400 START TEST locking_overlapped_coremask_via_rpc 00:07:50.400 ************************************ 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72107 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72107 /var/tmp/spdk.sock 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72107 ']' 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.400 10:21:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.660 [2024-12-10 10:21:25.639064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:50.660 [2024-12-10 10:21:25.639312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72107 ] 00:07:50.660 [2024-12-10 10:21:25.769854] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:50.660 [2024-12-10 10:21:25.769888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.660 [2024-12-10 10:21:25.803862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.660 [2024-12-10 10:21:25.803974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.660 [2024-12-10 10:21:25.804210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.660 [2024-12-10 10:21:25.843204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72125 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72125 /var/tmp/spdk2.sock 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72125 ']' 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.597 10:21:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.597 [2024-12-10 10:21:26.669870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.597 [2024-12-10 10:21:26.669969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72125 ] 00:07:51.597 [2024-12-10 10:21:26.812005] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:51.597 [2024-12-10 10:21:26.812051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.856 [2024-12-10 10:21:26.887091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.856 [2024-12-10 10:21:26.892365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.856 [2024-12-10 10:21:26.892370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.856 [2024-12-10 10:21:26.962950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.792 [2024-12-10 10:21:27.691562] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72107 has claimed it. 00:07:52.792 request: 00:07:52.792 { 00:07:52.792 "method": "framework_enable_cpumask_locks", 00:07:52.792 "req_id": 1 00:07:52.792 } 00:07:52.792 Got JSON-RPC error response 00:07:52.792 response: 00:07:52.792 { 00:07:52.792 "code": -32603, 00:07:52.792 "message": "Failed to claim CPU core: 2" 00:07:52.792 } 00:07:52.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
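Because both targets in this test run with --disable-cpumask-locks, the core locks are only claimed on demand through the RPC exercised above; a minimal sketch of that flow, using the socket path from the test (rpc.py path abbreviated, outcome as logged above):

    # Sketch: claiming CPU core locks after startup via JSON-RPC
    scripts/rpc.py framework_enable_cpumask_locks          # first target (-m 0x7) locks cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # second target (-m 0x1c) gets JSON-RPC error -32603 "Failed to claim CPU core: 2",
    # since core 2 is already locked by the first target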
00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72107 /var/tmp/spdk.sock 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72107 ']' 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.792 10:21:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72125 /var/tmp/spdk2.sock 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72125 ']' 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.051 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:53.310 00:07:53.310 real 0m2.788s 00:07:53.310 user 0m1.522s 00:07:53.310 sys 0m0.186s 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.310 10:21:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.310 ************************************ 00:07:53.310 END TEST locking_overlapped_coremask_via_rpc 00:07:53.310 ************************************ 00:07:53.310 10:21:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:53.310 10:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72107 ]] 00:07:53.310 10:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72107 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72107 ']' 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72107 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72107 00:07:53.310 killing process with pid 72107 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72107' 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72107 00:07:53.310 10:21:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72107 00:07:53.569 10:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72125 ]] 00:07:53.569 10:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72125 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72125 ']' 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72125 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.569 
10:21:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72125 00:07:53.569 killing process with pid 72125 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72125' 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72125 00:07:53.569 10:21:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72125 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72107 ]] 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72107 00:07:53.828 Process with pid 72107 is not found 00:07:53.828 Process with pid 72125 is not found 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72107 ']' 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72107 00:07:53.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72107) - No such process 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72107 is not found' 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72125 ]] 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72125 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72125 ']' 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72125 00:07:53.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72125) - No such process 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72125 is not found' 00:07:53.828 10:21:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:53.828 00:07:53.828 real 0m16.648s 00:07:53.828 user 0m31.422s 00:07:53.828 sys 0m4.593s 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.828 ************************************ 00:07:53.828 END TEST cpu_locks 00:07:53.828 ************************************ 00:07:53.828 10:21:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.828 ************************************ 00:07:53.828 END TEST event 00:07:53.828 ************************************ 00:07:53.828 00:07:53.828 real 0m44.050s 00:07:53.828 user 1m28.613s 00:07:53.828 sys 0m7.890s 00:07:53.828 10:21:28 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.828 10:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:53.828 10:21:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:53.828 10:21:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.828 10:21:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.828 10:21:29 -- common/autotest_common.sh@10 -- # set +x 00:07:53.828 ************************************ 00:07:53.828 START TEST thread 00:07:53.828 ************************************ 00:07:53.828 10:21:29 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:54.087 * Looking for test storage... 
00:07:54.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.087 10:21:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.087 10:21:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.087 10:21:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.087 10:21:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.087 10:21:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.087 10:21:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.087 10:21:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.087 10:21:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.087 10:21:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.087 10:21:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.087 10:21:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.087 10:21:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:54.087 10:21:29 thread -- scripts/common.sh@345 -- # : 1 00:07:54.087 10:21:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.087 10:21:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.087 10:21:29 thread -- scripts/common.sh@365 -- # decimal 1 00:07:54.087 10:21:29 thread -- scripts/common.sh@353 -- # local d=1 00:07:54.087 10:21:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.087 10:21:29 thread -- scripts/common.sh@355 -- # echo 1 00:07:54.087 10:21:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.087 10:21:29 thread -- scripts/common.sh@366 -- # decimal 2 00:07:54.087 10:21:29 thread -- scripts/common.sh@353 -- # local d=2 00:07:54.087 10:21:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.087 10:21:29 thread -- scripts/common.sh@355 -- # echo 2 00:07:54.087 10:21:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.087 10:21:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.087 10:21:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.087 10:21:29 thread -- scripts/common.sh@368 -- # return 0 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.087 --rc genhtml_branch_coverage=1 00:07:54.087 --rc genhtml_function_coverage=1 00:07:54.087 --rc genhtml_legend=1 00:07:54.087 --rc geninfo_all_blocks=1 00:07:54.087 --rc geninfo_unexecuted_blocks=1 00:07:54.087 00:07:54.087 ' 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.087 --rc genhtml_branch_coverage=1 00:07:54.087 --rc genhtml_function_coverage=1 00:07:54.087 --rc genhtml_legend=1 00:07:54.087 --rc geninfo_all_blocks=1 00:07:54.087 --rc geninfo_unexecuted_blocks=1 00:07:54.087 00:07:54.087 ' 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:54.087 --rc genhtml_branch_coverage=1 00:07:54.087 --rc genhtml_function_coverage=1 00:07:54.087 --rc genhtml_legend=1 00:07:54.087 --rc geninfo_all_blocks=1 00:07:54.087 --rc geninfo_unexecuted_blocks=1 00:07:54.087 00:07:54.087 ' 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.087 --rc genhtml_branch_coverage=1 00:07:54.087 --rc genhtml_function_coverage=1 00:07:54.087 --rc genhtml_legend=1 00:07:54.087 --rc geninfo_all_blocks=1 00:07:54.087 --rc geninfo_unexecuted_blocks=1 00:07:54.087 00:07:54.087 ' 00:07:54.087 10:21:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.087 10:21:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.087 ************************************ 00:07:54.087 START TEST thread_poller_perf 00:07:54.087 ************************************ 00:07:54.087 10:21:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:54.087 [2024-12-10 10:21:29.232366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:54.087 [2024-12-10 10:21:29.232476] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72255 ] 00:07:54.346 [2024-12-10 10:21:29.367079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.346 Running 1000 pollers for 1 seconds with 1 microseconds period. 
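The poller_cost figures printed in the summary that follows are consistent with a straightforward derivation from the other reported values: busy cycles divided by total_run_count gives cycles per poll, and the reported tsc_hz converts that to nanoseconds. A minimal sketch with the first run's numbers:

    # Sketch: reproduce poller_cost from the summary below (values copied from that output)
    busy=2210503212; runs=362000; tsc_hz=2200000000
    echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
    # -> poller_cost: 6106 (cyc), 2775 (nsec)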
00:07:54.346 [2024-12-10 10:21:29.408712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.282 [2024-12-10T10:21:30.509Z] ====================================== 00:07:55.282 [2024-12-10T10:21:30.510Z] busy:2210503212 (cyc) 00:07:55.283 [2024-12-10T10:21:30.510Z] total_run_count: 362000 00:07:55.283 [2024-12-10T10:21:30.510Z] tsc_hz: 2200000000 (cyc) 00:07:55.283 [2024-12-10T10:21:30.510Z] ====================================== 00:07:55.283 [2024-12-10T10:21:30.510Z] poller_cost: 6106 (cyc), 2775 (nsec) 00:07:55.283 00:07:55.283 real 0m1.248s 00:07:55.283 user 0m1.099s 00:07:55.283 sys 0m0.043s 00:07:55.283 10:21:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.283 10:21:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.283 ************************************ 00:07:55.283 END TEST thread_poller_perf 00:07:55.283 ************************************ 00:07:55.542 10:21:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:55.542 10:21:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:55.542 10:21:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.542 10:21:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 ************************************ 00:07:55.542 START TEST thread_poller_perf 00:07:55.542 ************************************ 00:07:55.542 10:21:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:55.542 [2024-12-10 10:21:30.532611] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:55.542 [2024-12-10 10:21:30.532711] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72285 ] 00:07:55.542 [2024-12-10 10:21:30.668988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.542 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:55.542 [2024-12-10 10:21:30.701778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.919 [2024-12-10T10:21:32.146Z] ====================================== 00:07:56.919 [2024-12-10T10:21:32.146Z] busy:2201786624 (cyc) 00:07:56.919 [2024-12-10T10:21:32.146Z] total_run_count: 4959000 00:07:56.919 [2024-12-10T10:21:32.146Z] tsc_hz: 2200000000 (cyc) 00:07:56.919 [2024-12-10T10:21:32.146Z] ====================================== 00:07:56.919 [2024-12-10T10:21:32.146Z] poller_cost: 443 (cyc), 201 (nsec) 00:07:56.919 00:07:56.919 real 0m1.231s 00:07:56.919 user 0m1.085s 00:07:56.919 sys 0m0.040s 00:07:56.919 ************************************ 00:07:56.919 END TEST thread_poller_perf 00:07:56.919 ************************************ 00:07:56.919 10:21:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.919 10:21:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 10:21:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:56.919 ************************************ 00:07:56.919 END TEST thread 00:07:56.919 ************************************ 00:07:56.919 00:07:56.919 real 0m2.758s 00:07:56.919 user 0m2.323s 00:07:56.919 sys 0m0.220s 00:07:56.919 10:21:31 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.919 10:21:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 10:21:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:56.919 10:21:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.919 10:21:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.919 10:21:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.919 10:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 ************************************ 00:07:56.919 START TEST app_cmdline 00:07:56.919 ************************************ 00:07:56.919 10:21:31 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.919 * Looking for test storage... 
00:07:56.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.919 10:21:31 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:56.919 10:21:31 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.919 10:21:31 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:56.919 10:21:32 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.919 10:21:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.920 10:21:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.920 --rc genhtml_branch_coverage=1 00:07:56.920 --rc genhtml_function_coverage=1 00:07:56.920 --rc genhtml_legend=1 00:07:56.920 --rc geninfo_all_blocks=1 00:07:56.920 --rc geninfo_unexecuted_blocks=1 00:07:56.920 00:07:56.920 ' 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.920 --rc genhtml_branch_coverage=1 00:07:56.920 --rc genhtml_function_coverage=1 00:07:56.920 --rc genhtml_legend=1 00:07:56.920 --rc geninfo_all_blocks=1 00:07:56.920 --rc geninfo_unexecuted_blocks=1 00:07:56.920 
00:07:56.920 ' 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.920 --rc genhtml_branch_coverage=1 00:07:56.920 --rc genhtml_function_coverage=1 00:07:56.920 --rc genhtml_legend=1 00:07:56.920 --rc geninfo_all_blocks=1 00:07:56.920 --rc geninfo_unexecuted_blocks=1 00:07:56.920 00:07:56.920 ' 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.920 --rc genhtml_branch_coverage=1 00:07:56.920 --rc genhtml_function_coverage=1 00:07:56.920 --rc genhtml_legend=1 00:07:56.920 --rc geninfo_all_blocks=1 00:07:56.920 --rc geninfo_unexecuted_blocks=1 00:07:56.920 00:07:56.920 ' 00:07:56.920 10:21:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.920 10:21:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72368 00:07:56.920 10:21:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.920 10:21:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72368 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72368 ']' 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.920 10:21:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.920 [2024-12-10 10:21:32.104994] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
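The target started above is deliberately restricted with --rpcs-allowed spdk_get_version,rpc_get_methods, which is what the rest of this test exercises: the two allowed methods answer normally and anything else is rejected. A minimal sketch of the calls made below (rpc.py path abbreviated):

    # Sketch: only the two allowed methods succeed against this target
    scripts/rpc.py spdk_get_version         # returns the version object shown below
    scripts/rpc.py rpc_get_methods          # lists exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats   # rejected with JSON-RPC error -32601 "Method not found"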
00:07:56.920 [2024-12-10 10:21:32.105079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72368 ] 00:07:57.179 [2024-12-10 10:21:32.244733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.179 [2024-12-10 10:21:32.276753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.179 [2024-12-10 10:21:32.309742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.438 10:21:32 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.438 10:21:32 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:57.438 10:21:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:57.697 { 00:07:57.697 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:57.697 "fields": { 00:07:57.697 "major": 24, 00:07:57.697 "minor": 9, 00:07:57.697 "patch": 1, 00:07:57.697 "suffix": "-pre", 00:07:57.697 "commit": "b18e1bd62" 00:07:57.697 } 00:07:57.697 } 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:57.697 10:21:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:57.697 10:21:32 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.967 request: 00:07:57.967 { 00:07:57.967 "method": "env_dpdk_get_mem_stats", 00:07:57.967 "req_id": 1 00:07:57.967 } 00:07:57.967 Got JSON-RPC error response 00:07:57.967 response: 00:07:57.967 { 00:07:57.967 "code": -32601, 00:07:57.967 "message": "Method not found" 00:07:57.967 } 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.967 10:21:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72368 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72368 ']' 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72368 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72368 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.967 killing process with pid 72368 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72368' 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 72368 00:07:57.967 10:21:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 72368 00:07:58.232 00:07:58.232 real 0m1.441s 00:07:58.232 user 0m1.917s 00:07:58.232 sys 0m0.355s 00:07:58.232 10:21:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.232 10:21:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.232 ************************************ 00:07:58.232 END TEST app_cmdline 00:07:58.232 ************************************ 00:07:58.232 10:21:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.232 10:21:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.232 10:21:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.232 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:58.232 ************************************ 00:07:58.232 START TEST version 00:07:58.232 ************************************ 00:07:58.232 10:21:33 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.232 * Looking for test storage... 
00:07:58.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:58.232 10:21:33 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:58.232 10:21:33 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:58.232 10:21:33 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:58.491 10:21:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.491 10:21:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.491 10:21:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.491 10:21:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.491 10:21:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.491 10:21:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.491 10:21:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.491 10:21:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.491 10:21:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.491 10:21:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.491 10:21:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.491 10:21:33 version -- scripts/common.sh@344 -- # case "$op" in 00:07:58.491 10:21:33 version -- scripts/common.sh@345 -- # : 1 00:07:58.491 10:21:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.491 10:21:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.491 10:21:33 version -- scripts/common.sh@365 -- # decimal 1 00:07:58.491 10:21:33 version -- scripts/common.sh@353 -- # local d=1 00:07:58.491 10:21:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.491 10:21:33 version -- scripts/common.sh@355 -- # echo 1 00:07:58.491 10:21:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.491 10:21:33 version -- scripts/common.sh@366 -- # decimal 2 00:07:58.491 10:21:33 version -- scripts/common.sh@353 -- # local d=2 00:07:58.491 10:21:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.491 10:21:33 version -- scripts/common.sh@355 -- # echo 2 00:07:58.491 10:21:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.491 10:21:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.491 10:21:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.491 10:21:33 version -- scripts/common.sh@368 -- # return 0 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.491 --rc genhtml_branch_coverage=1 00:07:58.491 --rc genhtml_function_coverage=1 00:07:58.491 --rc genhtml_legend=1 00:07:58.491 --rc geninfo_all_blocks=1 00:07:58.491 --rc geninfo_unexecuted_blocks=1 00:07:58.491 00:07:58.491 ' 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.491 --rc genhtml_branch_coverage=1 00:07:58.491 --rc genhtml_function_coverage=1 00:07:58.491 --rc genhtml_legend=1 00:07:58.491 --rc geninfo_all_blocks=1 00:07:58.491 --rc geninfo_unexecuted_blocks=1 00:07:58.491 00:07:58.491 ' 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:58.491 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:58.491 --rc genhtml_branch_coverage=1 00:07:58.491 --rc genhtml_function_coverage=1 00:07:58.491 --rc genhtml_legend=1 00:07:58.491 --rc geninfo_all_blocks=1 00:07:58.491 --rc geninfo_unexecuted_blocks=1 00:07:58.491 00:07:58.491 ' 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.491 --rc genhtml_branch_coverage=1 00:07:58.491 --rc genhtml_function_coverage=1 00:07:58.491 --rc genhtml_legend=1 00:07:58.491 --rc geninfo_all_blocks=1 00:07:58.491 --rc geninfo_unexecuted_blocks=1 00:07:58.491 00:07:58.491 ' 00:07:58.491 10:21:33 version -- app/version.sh@17 -- # get_header_version major 00:07:58.491 10:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.491 10:21:33 version -- app/version.sh@17 -- # major=24 00:07:58.491 10:21:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:58.491 10:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.491 10:21:33 version -- app/version.sh@18 -- # minor=9 00:07:58.491 10:21:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:58.491 10:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.491 10:21:33 version -- app/version.sh@19 -- # patch=1 00:07:58.491 10:21:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:58.491 10:21:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # cut -f2 00:07:58.491 10:21:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:58.491 10:21:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:58.491 10:21:33 version -- app/version.sh@22 -- # version=24.9 00:07:58.491 10:21:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:58.491 10:21:33 version -- app/version.sh@25 -- # version=24.9.1 00:07:58.491 10:21:33 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:58.491 10:21:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:58.491 10:21:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:58.491 10:21:33 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:58.491 10:21:33 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:58.491 00:07:58.491 real 0m0.251s 00:07:58.491 user 0m0.169s 00:07:58.491 sys 0m0.122s 00:07:58.491 10:21:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.491 ************************************ 00:07:58.491 END TEST version 00:07:58.491 10:21:33 version -- common/autotest_common.sh@10 -- # set +x 
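The get_header_version helper traced above pulls each field straight out of include/spdk/version.h and then cross-checks the Python package; a sketch of the same grep/cut/tr pipeline, using the paths from this run:

    # major/minor/patch/suffix come from the SPDK_VERSION_* defines
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    # the Python bindings must report the same string (24.9.1rc0 in this run)
    PYTHONPATH=/home/vagrant/spdk_repo/spdk/python python3 -c 'import spdk; print(spdk.__version__)'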
00:07:58.491 ************************************ 00:07:58.491 10:21:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:58.491 10:21:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:58.491 10:21:33 -- spdk/autotest.sh@194 -- # uname -s 00:07:58.491 10:21:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:58.491 10:21:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:58.491 10:21:33 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:58.491 10:21:33 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:58.491 10:21:33 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:58.491 10:21:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.491 10:21:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.491 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 ************************************ 00:07:58.491 START TEST spdk_dd 00:07:58.491 ************************************ 00:07:58.491 10:21:33 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:58.751 * Looking for test storage... 00:07:58.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.751 --rc genhtml_branch_coverage=1 00:07:58.751 --rc genhtml_function_coverage=1 00:07:58.751 --rc genhtml_legend=1 00:07:58.751 --rc geninfo_all_blocks=1 00:07:58.751 --rc geninfo_unexecuted_blocks=1 00:07:58.751 00:07:58.751 ' 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.751 --rc genhtml_branch_coverage=1 00:07:58.751 --rc genhtml_function_coverage=1 00:07:58.751 --rc genhtml_legend=1 00:07:58.751 --rc geninfo_all_blocks=1 00:07:58.751 --rc geninfo_unexecuted_blocks=1 00:07:58.751 00:07:58.751 ' 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.751 --rc genhtml_branch_coverage=1 00:07:58.751 --rc genhtml_function_coverage=1 00:07:58.751 --rc genhtml_legend=1 00:07:58.751 --rc geninfo_all_blocks=1 00:07:58.751 --rc geninfo_unexecuted_blocks=1 00:07:58.751 00:07:58.751 ' 00:07:58.751 10:21:33 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.751 --rc genhtml_branch_coverage=1 00:07:58.751 --rc genhtml_function_coverage=1 00:07:58.751 --rc genhtml_legend=1 00:07:58.751 --rc geninfo_all_blocks=1 00:07:58.751 --rc geninfo_unexecuted_blocks=1 00:07:58.751 00:07:58.751 ' 00:07:58.751 10:21:33 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.751 10:21:33 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.751 10:21:33 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.751 10:21:33 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.751 10:21:33 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.751 10:21:33 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:58.751 10:21:33 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.751 10:21:33 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:59.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:59.010 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:59.010 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:59.010 10:21:34 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:59.010 10:21:34 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:59.010 10:21:34 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:59.271 10:21:34 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:59.271 10:21:34 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:59.271 10:21:34 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
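The check_liburing loop running here walks every NEEDED entry that objdump reports for the spdk_dd binary and flags whether any of them is liburing; stripped of the xtrace noise, the scan reduces to roughly the following (binary path as used in this run):

    # list the dynamic dependencies of spdk_dd and look for liburing
    objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED |
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && echo '* spdk_dd linked to liburing'
    done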
00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:59.271 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:59.272 * spdk_dd linked to liburing 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:59.272 10:21:34 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:59.272 10:21:34 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 
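The CONFIG_* dump that follows comes from sourcing test/common/build_config.sh, which check_liburing consults before settling on liburing_in_use=1; the uring-related switches from this build can be read back directly, for example:

    # uring support in this build configuration (CONFIG_URING=y and CONFIG_URING_ZNS=y in this run)
    grep -E '^CONFIG_URING' /home/vagrant/spdk_repo/spdk/test/common/build_config.sh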
00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:59.273 10:21:34 
spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:59.273 10:21:34 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:59.273 10:21:34 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:59.273 10:21:34 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:59.273 10:21:34 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:59.273 10:21:34 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:59.273 10:21:34 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:59.273 10:21:34 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:59.273 10:21:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:59.273 10:21:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.273 10:21:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:59.273 ************************************ 00:07:59.273 START TEST spdk_dd_basic_rw 00:07:59.273 ************************************ 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:59.273 * Looking for test storage... 
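The basic_rw test that starts here first probes the native block size of the first controller with spdk_nvme_identify and parses the "Current LBA Format" line from the output; the probe amounts to roughly the following (controller address as used in this run):

    # identify the controller and read back which LBA format is active
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' |
        grep 'Current LBA Format'
    # in this run the answer is LBA Format #04, i.e. a 4096-byte data size with no metadata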
00:07:59.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.273 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:59.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.534 --rc genhtml_branch_coverage=1 00:07:59.534 --rc genhtml_function_coverage=1 00:07:59.534 --rc genhtml_legend=1 00:07:59.534 --rc geninfo_all_blocks=1 00:07:59.534 --rc geninfo_unexecuted_blocks=1 00:07:59.534 00:07:59.534 ' 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:59.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.534 --rc genhtml_branch_coverage=1 00:07:59.534 --rc genhtml_function_coverage=1 00:07:59.534 --rc genhtml_legend=1 00:07:59.534 --rc geninfo_all_blocks=1 00:07:59.534 --rc geninfo_unexecuted_blocks=1 00:07:59.534 00:07:59.534 ' 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:59.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.534 --rc genhtml_branch_coverage=1 00:07:59.534 --rc genhtml_function_coverage=1 00:07:59.534 --rc genhtml_legend=1 00:07:59.534 --rc geninfo_all_blocks=1 00:07:59.534 --rc geninfo_unexecuted_blocks=1 00:07:59.534 00:07:59.534 ' 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:59.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.534 --rc genhtml_branch_coverage=1 00:07:59.534 --rc genhtml_function_coverage=1 00:07:59.534 --rc genhtml_legend=1 00:07:59.534 --rc geninfo_all_blocks=1 00:07:59.534 --rc geninfo_unexecuted_blocks=1 00:07:59.534 00:07:59.534 ' 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.534 10:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.535 10:21:34 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:59.535 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.536 ************************************ 00:07:59.536 START TEST dd_bs_lt_native_bs 00:07:59.536 ************************************ 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.536 10:21:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:59.536 { 00:07:59.536 "subsystems": [ 00:07:59.536 { 00:07:59.536 "subsystem": "bdev", 00:07:59.536 "config": [ 00:07:59.536 { 00:07:59.536 "params": { 00:07:59.536 "trtype": "pcie", 00:07:59.537 "traddr": "0000:00:10.0", 00:07:59.537 "name": "Nvme0" 00:07:59.537 }, 00:07:59.537 "method": "bdev_nvme_attach_controller" 00:07:59.537 }, 00:07:59.537 { 00:07:59.537 "method": "bdev_wait_for_examine" 00:07:59.537 } 00:07:59.537 ] 00:07:59.537 } 00:07:59.537 ] 00:07:59.537 } 00:07:59.796 [2024-12-10 10:21:34.773686] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:59.796 [2024-12-10 10:21:34.773796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72706 ] 00:07:59.796 [2024-12-10 10:21:34.914447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.796 [2024-12-10 10:21:34.956757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.796 [2024-12-10 10:21:34.990178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.053 [2024-12-10 10:21:35.081616] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:00.053 [2024-12-10 10:21:35.081700] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.053 [2024-12-10 10:21:35.153791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.053 00:08:00.053 real 0m0.511s 00:08:00.053 user 0m0.349s 00:08:00.053 sys 0m0.118s 00:08:00.053 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.053 10:21:35 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:00.053 ************************************ 00:08:00.054 END TEST dd_bs_lt_native_bs 00:08:00.054 ************************************ 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.054 ************************************ 00:08:00.054 START TEST dd_rw 00:08:00.054 ************************************ 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:00.054 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.621 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:00.621 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:00.621 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.621 10:21:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.880 [2024-12-10 10:21:35.894928] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
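Note on the records above: the block-size probe (dd/common.sh get_native_nvme_bs) captures the whole spdk_nvme_identify dump for the controller at 0000:00:10.0 and pulls the in-use LBA format out of it with two regex matches, first "Current LBA Format: LBA Format #04" and then that format's Data Size of 4096 bytes. That is why dd_bs_lt_native_bs wraps its spdk_dd run in NOT: a 2048-byte --bs is smaller than the 4096-byte native block size, so the "--bs value cannot be less than ... native block size" error is the expected outcome, and the harness folds the raw exit status down (234 -> 106 -> 1) before accepting it as the expected failure. A minimal, self-contained sketch of the same extraction, run against a two-line sample string standing in for the full identify dump (the sample is an assumption, not part of the suite):

    # Sketch only: reproduce the two regex matches used to find the native block size.
    id=$'Current LBA Format: LBA Format #04\nLBA Format #04: Data Size: 4096 Metadata Size: 0'
    re_cur='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_cur ]] && lbaf=${BASH_REMATCH[1]}            # -> 04
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}      # -> 4096
    echo "native_bs=${native_bs}"                              # any spdk_dd --bs below this is rejected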
00:08:00.881 [2024-12-10 10:21:35.895029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:08:00.881 { 00:08:00.881 "subsystems": [ 00:08:00.881 { 00:08:00.881 "subsystem": "bdev", 00:08:00.881 "config": [ 00:08:00.881 { 00:08:00.881 "params": { 00:08:00.881 "trtype": "pcie", 00:08:00.881 "traddr": "0000:00:10.0", 00:08:00.881 "name": "Nvme0" 00:08:00.881 }, 00:08:00.881 "method": "bdev_nvme_attach_controller" 00:08:00.881 }, 00:08:00.881 { 00:08:00.881 "method": "bdev_wait_for_examine" 00:08:00.881 } 00:08:00.881 ] 00:08:00.881 } 00:08:00.881 ] 00:08:00.881 } 00:08:00.881 [2024-12-10 10:21:36.036170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.881 [2024-12-10 10:21:36.068068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.881 [2024-12-10 10:21:36.095753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.140  [2024-12-10T10:21:36.367Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:01.140 00:08:01.140 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:01.140 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:01.140 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.140 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.399 [2024-12-10 10:21:36.379366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
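Every spdk_dd call in these records receives its bdev configuration as JSON on a spare file descriptor (gen_conf feeding --json /dev/fd/62 or /dev/fd/61): a single bdev subsystem that attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for examine, which is what exposes the Nvme0n1 namespace the copies target. A hedged sketch of driving spdk_dd the same way with process substitution (the binary path and addresses are copied from the log; treat the concrete command as illustrative only):

    # Sketch: pass the bdev config to spdk_dd on an anonymous fd, as the suite does.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # Demonstration write of a single 4 KiB block of zeros to the namespace.
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=4096 --count=1 --json <(printf '%s' "$conf")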
00:08:01.399 [2024-12-10 10:21:36.379495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72756 ] 00:08:01.399 { 00:08:01.399 "subsystems": [ 00:08:01.399 { 00:08:01.399 "subsystem": "bdev", 00:08:01.399 "config": [ 00:08:01.399 { 00:08:01.399 "params": { 00:08:01.399 "trtype": "pcie", 00:08:01.399 "traddr": "0000:00:10.0", 00:08:01.399 "name": "Nvme0" 00:08:01.399 }, 00:08:01.399 "method": "bdev_nvme_attach_controller" 00:08:01.399 }, 00:08:01.399 { 00:08:01.399 "method": "bdev_wait_for_examine" 00:08:01.399 } 00:08:01.399 ] 00:08:01.399 } 00:08:01.399 ] 00:08:01.399 } 00:08:01.399 [2024-12-10 10:21:36.518759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.399 [2024-12-10 10:21:36.550664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.399 [2024-12-10 10:21:36.578148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.658  [2024-12-10T10:21:36.885Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:01.658 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.658 10:21:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.658 [2024-12-10 10:21:36.853757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
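Each (bs, qd) data point in this part of the log is the same four-step cycle: write dd.dump0 to Nvme0n1 (basic_rw.sh@30), read the identical range back into dd.dump1 (@37), compare the two files with diff -q (@44), then clear_nvme zero-fills the first mebibyte of the bdev (bs=1048576, count=1) so the next pass starts from known contents. Reusing SPDK_DD and conf from the sketch above, one pass looks roughly like this (a sketch of the flow, not the suite's helper functions):

    # One verification pass; dd.dump0 is assumed to already hold count*bs bytes.
    bs=4096 qd=1 count=15
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(printf '%s' "$conf")
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(printf '%s' "$conf")
    diff -q dd.dump0 dd.dump1                                                                 # any mismatch fails the test
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(printf '%s' "$conf")   # clear_nvme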
00:08:01.658 [2024-12-10 10:21:36.853846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72772 ] 00:08:01.658 { 00:08:01.658 "subsystems": [ 00:08:01.658 { 00:08:01.658 "subsystem": "bdev", 00:08:01.658 "config": [ 00:08:01.658 { 00:08:01.658 "params": { 00:08:01.658 "trtype": "pcie", 00:08:01.658 "traddr": "0000:00:10.0", 00:08:01.658 "name": "Nvme0" 00:08:01.658 }, 00:08:01.658 "method": "bdev_nvme_attach_controller" 00:08:01.658 }, 00:08:01.658 { 00:08:01.658 "method": "bdev_wait_for_examine" 00:08:01.658 } 00:08:01.658 ] 00:08:01.658 } 00:08:01.658 ] 00:08:01.658 } 00:08:01.917 [2024-12-10 10:21:36.992437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.917 [2024-12-10 10:21:37.025392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.917 [2024-12-10 10:21:37.053691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.917  [2024-12-10T10:21:37.405Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:02.178 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:02.178 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.745 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:02.745 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:02.745 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.745 10:21:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.745 [2024-12-10 10:21:37.862603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
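The transfer sizes in the surrounding records come straight from basic_rw.sh's parameter matrix: the probed 4096-byte native block size is left-shifted into bss=(4096 8192 16384), each block size is paired with qds=(1 64), and the payload is count*bs, so the 4 KiB passes move 15*4096 = 61440 bytes (hence the "60/60 [kB]" progress lines), the 8 KiB passes 7*8192 = 57344, and the 16 KiB passes 3*16384 = 49152. The same arithmetic, spelled out as a small sketch:

    # Sketch of the size matrix the dd_rw trace walks through.
    native_bs=4096
    bss=(); for s in 0 1 2; do bss+=( $(( native_bs << s )) ); done    # 4096 8192 16384
    qds=(1 64)
    declare -A counts=( [4096]=15 [8192]=7 [16384]=3 )                 # counts taken from the log
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            printf 'bs=%-5s qd=%-2s size=%d bytes\n' "$bs" "$qd" $(( ${counts[$bs]} * bs ))
        done
    done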
00:08:02.745 [2024-12-10 10:21:37.862708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72790 ] 00:08:02.745 { 00:08:02.745 "subsystems": [ 00:08:02.745 { 00:08:02.745 "subsystem": "bdev", 00:08:02.745 "config": [ 00:08:02.745 { 00:08:02.745 "params": { 00:08:02.745 "trtype": "pcie", 00:08:02.745 "traddr": "0000:00:10.0", 00:08:02.745 "name": "Nvme0" 00:08:02.745 }, 00:08:02.745 "method": "bdev_nvme_attach_controller" 00:08:02.745 }, 00:08:02.745 { 00:08:02.745 "method": "bdev_wait_for_examine" 00:08:02.745 } 00:08:02.745 ] 00:08:02.745 } 00:08:02.745 ] 00:08:02.745 } 00:08:03.004 [2024-12-10 10:21:38.001101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.004 [2024-12-10 10:21:38.035205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.004 [2024-12-10 10:21:38.063306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.004  [2024-12-10T10:21:38.490Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:03.263 00:08:03.263 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:03.263 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:03.263 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.263 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.263 [2024-12-10 10:21:38.337374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.263 [2024-12-10 10:21:38.337519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72799 ] 00:08:03.263 { 00:08:03.263 "subsystems": [ 00:08:03.263 { 00:08:03.263 "subsystem": "bdev", 00:08:03.263 "config": [ 00:08:03.263 { 00:08:03.263 "params": { 00:08:03.263 "trtype": "pcie", 00:08:03.263 "traddr": "0000:00:10.0", 00:08:03.263 "name": "Nvme0" 00:08:03.263 }, 00:08:03.263 "method": "bdev_nvme_attach_controller" 00:08:03.263 }, 00:08:03.263 { 00:08:03.263 "method": "bdev_wait_for_examine" 00:08:03.263 } 00:08:03.263 ] 00:08:03.263 } 00:08:03.263 ] 00:08:03.263 } 00:08:03.263 [2024-12-10 10:21:38.476193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.522 [2024-12-10 10:21:38.509068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.522 [2024-12-10 10:21:38.536745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.522  [2024-12-10T10:21:38.749Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:03.522 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.782 10:21:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.782 [2024-12-10 10:21:38.813779] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.782 [2024-12-10 10:21:38.813885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72814 ] 00:08:03.782 { 00:08:03.782 "subsystems": [ 00:08:03.782 { 00:08:03.782 "subsystem": "bdev", 00:08:03.782 "config": [ 00:08:03.782 { 00:08:03.782 "params": { 00:08:03.782 "trtype": "pcie", 00:08:03.782 "traddr": "0000:00:10.0", 00:08:03.782 "name": "Nvme0" 00:08:03.782 }, 00:08:03.782 "method": "bdev_nvme_attach_controller" 00:08:03.782 }, 00:08:03.782 { 00:08:03.782 "method": "bdev_wait_for_examine" 00:08:03.782 } 00:08:03.782 ] 00:08:03.782 } 00:08:03.782 ] 00:08:03.782 } 00:08:03.782 [2024-12-10 10:21:38.952713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.782 [2024-12-10 10:21:38.987969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.041 [2024-12-10 10:21:39.018667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.041  [2024-12-10T10:21:39.268Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.041 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:04.041 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.608 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:04.608 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.608 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.608 10:21:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.608 [2024-12-10 10:21:39.802065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
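gen_bytes <n> (dd/common.sh@98) runs with xtrace disabled, so its body never appears in this log; its job here is to fill the dd.dump0 input with exactly n bytes before each write pass (61440, 57344 or 49152 in this section). For readers who want to reproduce the input outside the suite, a stand-in along these lines would behave equivalently; this is an assumption about intent, not the suite's implementation:

    # Hypothetical stand-in for gen_bytes: write n random bytes to dd.dump0.
    gen_bytes_sketch() {
        local n=$1
        head -c "$n" /dev/urandom > dd.dump0
    }
    gen_bytes_sketch 57344    # mirrors the 'gen_bytes 57344' call in the trace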
00:08:04.608 [2024-12-10 10:21:39.802169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72833 ] 00:08:04.608 { 00:08:04.608 "subsystems": [ 00:08:04.608 { 00:08:04.608 "subsystem": "bdev", 00:08:04.608 "config": [ 00:08:04.608 { 00:08:04.608 "params": { 00:08:04.608 "trtype": "pcie", 00:08:04.608 "traddr": "0000:00:10.0", 00:08:04.608 "name": "Nvme0" 00:08:04.608 }, 00:08:04.608 "method": "bdev_nvme_attach_controller" 00:08:04.608 }, 00:08:04.608 { 00:08:04.608 "method": "bdev_wait_for_examine" 00:08:04.608 } 00:08:04.608 ] 00:08:04.608 } 00:08:04.608 ] 00:08:04.608 } 00:08:04.867 [2024-12-10 10:21:39.940989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.867 [2024-12-10 10:21:39.973366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.867 [2024-12-10 10:21:40.002116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.867  [2024-12-10T10:21:40.353Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:05.126 00:08:05.126 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:05.126 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.126 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.126 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.126 [2024-12-10 10:21:40.278816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:05.126 [2024-12-10 10:21:40.278922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72847 ] 00:08:05.126 { 00:08:05.126 "subsystems": [ 00:08:05.126 { 00:08:05.126 "subsystem": "bdev", 00:08:05.126 "config": [ 00:08:05.126 { 00:08:05.126 "params": { 00:08:05.126 "trtype": "pcie", 00:08:05.126 "traddr": "0000:00:10.0", 00:08:05.126 "name": "Nvme0" 00:08:05.126 }, 00:08:05.126 "method": "bdev_nvme_attach_controller" 00:08:05.126 }, 00:08:05.126 { 00:08:05.126 "method": "bdev_wait_for_examine" 00:08:05.126 } 00:08:05.126 ] 00:08:05.126 } 00:08:05.126 ] 00:08:05.126 } 00:08:05.385 [2024-12-10 10:21:40.416727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.385 [2024-12-10 10:21:40.452909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.385 [2024-12-10 10:21:40.483336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.385  [2024-12-10T10:21:40.871Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:05.644 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.644 10:21:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.644 { 00:08:05.644 "subsystems": [ 00:08:05.644 { 00:08:05.644 "subsystem": "bdev", 00:08:05.644 "config": [ 00:08:05.644 { 00:08:05.644 "params": { 00:08:05.644 "trtype": "pcie", 00:08:05.644 "traddr": "0000:00:10.0", 00:08:05.644 "name": "Nvme0" 00:08:05.644 }, 00:08:05.644 "method": "bdev_nvme_attach_controller" 00:08:05.644 }, 00:08:05.644 { 00:08:05.644 "method": "bdev_wait_for_examine" 00:08:05.644 } 00:08:05.644 ] 00:08:05.644 } 00:08:05.644 ] 00:08:05.644 } 00:08:05.644 [2024-12-10 10:21:40.763535] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:05.645 [2024-12-10 10:21:40.763640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:08:05.903 [2024-12-10 10:21:40.897611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.903 [2024-12-10 10:21:40.930900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.903 [2024-12-10 10:21:40.960406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.903  [2024-12-10T10:21:41.389Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.162 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:06.162 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:06.730 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:06.730 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.730 10:21:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 { 00:08:06.730 "subsystems": [ 00:08:06.730 { 00:08:06.730 "subsystem": "bdev", 00:08:06.730 "config": [ 00:08:06.730 { 00:08:06.730 "params": { 00:08:06.730 "trtype": "pcie", 00:08:06.730 "traddr": "0000:00:10.0", 00:08:06.730 "name": "Nvme0" 00:08:06.730 }, 00:08:06.730 "method": "bdev_nvme_attach_controller" 00:08:06.730 }, 00:08:06.730 { 00:08:06.730 "method": "bdev_wait_for_examine" 00:08:06.730 } 00:08:06.730 ] 00:08:06.730 } 00:08:06.730 ] 00:08:06.730 } 00:08:06.730 [2024-12-10 10:21:41.752439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:06.731 [2024-12-10 10:21:41.752545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72881 ] 00:08:06.731 [2024-12-10 10:21:41.892777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.731 [2024-12-10 10:21:41.929646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.990 [2024-12-10 10:21:41.960930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.990  [2024-12-10T10:21:42.217Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:06.990 00:08:06.990 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:06.990 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:06.990 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.990 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.249 { 00:08:07.249 "subsystems": [ 00:08:07.249 { 00:08:07.249 "subsystem": "bdev", 00:08:07.249 "config": [ 00:08:07.249 { 00:08:07.249 "params": { 00:08:07.249 "trtype": "pcie", 00:08:07.249 "traddr": "0000:00:10.0", 00:08:07.249 "name": "Nvme0" 00:08:07.249 }, 00:08:07.249 "method": "bdev_nvme_attach_controller" 00:08:07.249 }, 00:08:07.249 { 00:08:07.249 "method": "bdev_wait_for_examine" 00:08:07.249 } 00:08:07.249 ] 00:08:07.249 } 00:08:07.249 ] 00:08:07.249 } 00:08:07.249 [2024-12-10 10:21:42.234772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:07.249 [2024-12-10 10:21:42.234884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72895 ] 00:08:07.249 [2024-12-10 10:21:42.373032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.249 [2024-12-10 10:21:42.404837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.249 [2024-12-10 10:21:42.432399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.508  [2024-12-10T10:21:42.735Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:07.508 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.508 10:21:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.508 [2024-12-10 10:21:42.716210] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
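Every spdk_dd run in this section boots its own DPDK EAL instance on a single core (-c 0x1, physical-address IOVA mode) with a unique hugepage prefix of the form --file-prefix=spdk_pid<pid>, which is why the pid in the EAL parameters line climbs with each pass (spdk_pid72706 through spdk_pid73001 across this section); --huge-unlink keeps the hugepage files from being left behind. When working from a saved copy of such a log, the number of separate spdk_dd invocations can be recovered from those prefixes (the log file name below is hypothetical):

    # Sketch: count the distinct spdk_dd EAL initialisations in a saved log.
    grep -o 'file-prefix=spdk_pid[0-9]*' autotest.log | sort -u | wc -l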
00:08:07.508 [2024-12-10 10:21:42.716306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72910 ] 00:08:07.508 { 00:08:07.508 "subsystems": [ 00:08:07.508 { 00:08:07.508 "subsystem": "bdev", 00:08:07.508 "config": [ 00:08:07.508 { 00:08:07.508 "params": { 00:08:07.508 "trtype": "pcie", 00:08:07.508 "traddr": "0000:00:10.0", 00:08:07.508 "name": "Nvme0" 00:08:07.508 }, 00:08:07.508 "method": "bdev_nvme_attach_controller" 00:08:07.508 }, 00:08:07.508 { 00:08:07.508 "method": "bdev_wait_for_examine" 00:08:07.508 } 00:08:07.508 ] 00:08:07.508 } 00:08:07.508 ] 00:08:07.508 } 00:08:07.767 [2024-12-10 10:21:42.852426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.767 [2024-12-10 10:21:42.884458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.767 [2024-12-10 10:21:42.911584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.025  [2024-12-10T10:21:43.252Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:08.025 00:08:08.025 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:08.025 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:08.025 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:08.025 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:08.026 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:08.026 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:08.026 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:08.026 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.592 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:08.592 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:08.592 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.592 10:21:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.592 { 00:08:08.592 "subsystems": [ 00:08:08.592 { 00:08:08.592 "subsystem": "bdev", 00:08:08.592 "config": [ 00:08:08.592 { 00:08:08.592 "params": { 00:08:08.592 "trtype": "pcie", 00:08:08.592 "traddr": "0000:00:10.0", 00:08:08.592 "name": "Nvme0" 00:08:08.592 }, 00:08:08.592 "method": "bdev_nvme_attach_controller" 00:08:08.592 }, 00:08:08.592 { 00:08:08.592 "method": "bdev_wait_for_examine" 00:08:08.592 } 00:08:08.592 ] 00:08:08.592 } 00:08:08.592 ] 00:08:08.592 } 00:08:08.592 [2024-12-10 10:21:43.668521] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:08.592 [2024-12-10 10:21:43.668626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72929 ] 00:08:08.592 [2024-12-10 10:21:43.808084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.851 [2024-12-10 10:21:43.841397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.851 [2024-12-10 10:21:43.868779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.851  [2024-12-10T10:21:44.337Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:09.110 00:08:09.110 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:09.110 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:09.110 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.110 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.110 { 00:08:09.110 "subsystems": [ 00:08:09.110 { 00:08:09.110 "subsystem": "bdev", 00:08:09.110 "config": [ 00:08:09.110 { 00:08:09.110 "params": { 00:08:09.110 "trtype": "pcie", 00:08:09.110 "traddr": "0000:00:10.0", 00:08:09.110 "name": "Nvme0" 00:08:09.110 }, 00:08:09.110 "method": "bdev_nvme_attach_controller" 00:08:09.110 }, 00:08:09.110 { 00:08:09.110 "method": "bdev_wait_for_examine" 00:08:09.110 } 00:08:09.110 ] 00:08:09.110 } 00:08:09.110 ] 00:08:09.110 } 00:08:09.110 [2024-12-10 10:21:44.147670] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:09.110 [2024-12-10 10:21:44.147793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:08:09.110 [2024-12-10 10:21:44.284854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.110 [2024-12-10 10:21:44.316662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.369 [2024-12-10 10:21:44.345410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.369  [2024-12-10T10:21:44.596Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:09.369 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:09.369 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:09.370 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.370 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:09.370 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.370 10:21:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.629 [2024-12-10 10:21:44.632069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:09.629 [2024-12-10 10:21:44.632192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72953 ] 00:08:09.629 { 00:08:09.629 "subsystems": [ 00:08:09.629 { 00:08:09.629 "subsystem": "bdev", 00:08:09.629 "config": [ 00:08:09.629 { 00:08:09.629 "params": { 00:08:09.629 "trtype": "pcie", 00:08:09.629 "traddr": "0000:00:10.0", 00:08:09.629 "name": "Nvme0" 00:08:09.629 }, 00:08:09.629 "method": "bdev_nvme_attach_controller" 00:08:09.629 }, 00:08:09.629 { 00:08:09.629 "method": "bdev_wait_for_examine" 00:08:09.629 } 00:08:09.629 ] 00:08:09.629 } 00:08:09.629 ] 00:08:09.629 } 00:08:09.629 [2024-12-10 10:21:44.771318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.629 [2024-12-10 10:21:44.804758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.629 [2024-12-10 10:21:44.832359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.888  [2024-12-10T10:21:45.115Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:09.888 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:09.888 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:10.454 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:10.454 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.454 10:21:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 { 00:08:10.454 "subsystems": [ 00:08:10.454 { 00:08:10.454 "subsystem": "bdev", 00:08:10.454 "config": [ 00:08:10.454 { 00:08:10.454 "params": { 00:08:10.454 "trtype": "pcie", 00:08:10.454 "traddr": "0000:00:10.0", 00:08:10.454 "name": "Nvme0" 00:08:10.454 }, 00:08:10.454 "method": "bdev_nvme_attach_controller" 00:08:10.454 }, 00:08:10.454 { 00:08:10.454 "method": "bdev_wait_for_examine" 00:08:10.454 } 00:08:10.454 ] 00:08:10.454 } 00:08:10.454 ] 00:08:10.454 } 00:08:10.454 [2024-12-10 10:21:45.580829] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:10.454 [2024-12-10 10:21:45.580946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72972 ] 00:08:10.713 [2024-12-10 10:21:45.720306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.713 [2024-12-10 10:21:45.756677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.713 [2024-12-10 10:21:45.784549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.713  [2024-12-10T10:21:46.199Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:10.972 00:08:10.972 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:10.972 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:10.972 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.972 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.972 [2024-12-10 10:21:46.048923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.972 [2024-12-10 10:21:46.049009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72990 ] 00:08:10.972 { 00:08:10.972 "subsystems": [ 00:08:10.972 { 00:08:10.972 "subsystem": "bdev", 00:08:10.972 "config": [ 00:08:10.972 { 00:08:10.972 "params": { 00:08:10.972 "trtype": "pcie", 00:08:10.972 "traddr": "0000:00:10.0", 00:08:10.972 "name": "Nvme0" 00:08:10.972 }, 00:08:10.972 "method": "bdev_nvme_attach_controller" 00:08:10.972 }, 00:08:10.972 { 00:08:10.972 "method": "bdev_wait_for_examine" 00:08:10.972 } 00:08:10.972 ] 00:08:10.972 } 00:08:10.972 ] 00:08:10.972 } 00:08:10.972 [2024-12-10 10:21:46.180177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.231 [2024-12-10 10:21:46.213543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.231 [2024-12-10 10:21:46.240837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.231  [2024-12-10T10:21:46.458Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:11.231 00:08:11.231 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
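The records that follow wrap up dd_rw (one last 1 MiB zero-fill, the timing summary and END TEST) and start dd_rw_offset, which keeps its payload in a shell variable rather than a file: gen_bytes 4096 is invoked and the resulting 4 KiB lowercase-alphanumeric string is assigned to data (captured below), while count, seek and skip are all initialised to 1, i.e. dd-style one-block offsets on the output and input side. An illustrative way to build a comparable 4 KiB payload (a stand-in, not the suite's generator):

    # Hypothetical stand-in: 4096 lowercase-alphanumeric characters in a variable.
    data=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)
    count=1 seek=1 skip=1
    printf '%s' "$data" | wc -c    # 4096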
00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.491 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.491 [2024-12-10 10:21:46.516022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:11.491 [2024-12-10 10:21:46.516120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73001 ] 00:08:11.491 { 00:08:11.491 "subsystems": [ 00:08:11.491 { 00:08:11.491 "subsystem": "bdev", 00:08:11.491 "config": [ 00:08:11.491 { 00:08:11.491 "params": { 00:08:11.491 "trtype": "pcie", 00:08:11.491 "traddr": "0000:00:10.0", 00:08:11.491 "name": "Nvme0" 00:08:11.491 }, 00:08:11.491 "method": "bdev_nvme_attach_controller" 00:08:11.491 }, 00:08:11.491 { 00:08:11.491 "method": "bdev_wait_for_examine" 00:08:11.491 } 00:08:11.491 ] 00:08:11.491 } 00:08:11.491 ] 00:08:11.491 } 00:08:11.491 [2024-12-10 10:21:46.649647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.491 [2024-12-10 10:21:46.681319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.491 [2024-12-10 10:21:46.711938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.749  [2024-12-10T10:21:46.976Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:11.749 00:08:11.749 00:08:11.749 real 0m11.658s 00:08:11.749 user 0m8.599s 00:08:11.749 sys 0m3.586s 00:08:11.749 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.749 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.749 ************************************ 00:08:11.749 END TEST dd_rw 00:08:11.749 ************************************ 00:08:11.749 10:21:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:11.749 10:21:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.749 10:21:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.749 10:21:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.009 ************************************ 00:08:12.009 START TEST dd_rw_offset 00:08:12.009 ************************************ 00:08:12.009 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:08:12.009 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:12.009 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:12.009 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:12.009 10:21:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:12.009 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:12.009 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=zkge1m4pr821upqiifixbhu3unz7ekmiihsjzqtcr4c3ae6ohuh31yhtgu397ddewsi30r1qscvesyeuuvrvbq4ee2yl5lmjqade2umpzom0ab49rd5gadu8k9oqy8k7ovgsbi25fl8bh1na50e4aiuzd7e1bi40dhwnl8rk1f87zvg7z38mc2frqclp9ebbegekef1e0g3sf3y76tuja8ov3k2i2thqursd8rbwhu70boqf3kw5opkgb4e4jyk3yszw12hdbflrb1kmsydmlrx2qs5owmgjxzj35zyxuqpusj3bxsva9xu1698j3f2j2vyf6b75pqe6ogy04xo8xti9pelygh74p0do0mslammnu0zq4s2xx03q7vettamqhcrx728sae8yj01e3aeaka1nudwip9cqbf359vrwbuafhdrrrqxb9ys88o0irs3q8l5oe3ky7lgvbldfskbq5xv85s9w097qwwzden6cqtpupn1k51n8w1blvi5xe25hpdrl0z6l45lci75rt8cbfcxsx4jt58v9viockjp4rfodyq43neru47nh8icytfidnexeo7sx56dusb0ie0c5mbtwhdfd5u9o48bvfrg84o9jp09hev5k411frv10f9vft0vhn5wah2ns6xzl1mybu745qecno8wablge6exwjykzjak03kzvjb34jgmtkpl4116vd5zlse18lly4yhxzpteofzlr6olhohvh9ui2lfcfpucv0pocxbt92pgkcgnnyui9f9sdhcit2mof3pqlrbd1kai6m35tjrmypy4ntcnjzpkrzpw8qyknt8nwx35fagfdsfuvetraledkghkjhra9z9p339zf0usdyja0ee5zes2esjrrgdndaxxa49inwfad1psnxzh42s2h91p8qtz9fnbv1sxqevp64usj2lobvln4lk6jzu0fgid2ygpto7bxdgk768o3ddvwz2zeo6fvfpkx3d0zfp59zs2awtb1lvw8oovmy3b2si9wlwcqga9wo20zkcpdr18jqcglg1elopcmws6jzxcbpafrf49qd5jzb1y2a00latsquwk929yvorfferad3tth2j9otwbfb5ybnwib1hwyzrpsl4fein8bt974h60znmr3ydedn75ozplinm5mzr3x88udtj4h8nvysy27aqhn9xjril5zu2fcdpf00jee69til46xjuow3cn4wwb2vyleqwbsh0qabqaj5btuhkrkrn36ol8xunk6m10f5ioch5vb0kwk4c1cjm9gkfr1przrhhn23m1sxnqvybftlrjdyu69zdb3upsiplaco5wuf0jfaw36piwct8kq8p6da79npo63v22djbph9djt8wnbyr811y79umo5qvmogvrkd7h2ppfvvconskqtz2bkwlqvrggxfdhrw8133cvqygbe9rg8rzyiuoxzickkcdq5gb1gf5gqxsj2vuj7r0jcw8wcsx89oc65xa9jmzy4hxp28f8smjiqamnv7pyyb1btup87l6wtpak6ukcxlqqia9y9vi8oap6otvt7oocecfii6yvohzbwzu9i8plvx9aab3xv6biw0w0npgxjcdimzsw7qh7sai07c7mbjvg31a8fe626cxud29r0qdxztjf4t9uijx9h5mp63ir7wjrrdnw3epffw63iuqdtmdmjrdplh8yxsuvywulzf4pazyd0lcq1hk3idg4lf21qfgstb8vgm5fnl1l78suj0tgrchuo2dmky728cvp4omja3rpcg9i7d7ew65txvyi07n06zqbh8ni9hkokishn3z0chrxjl6omktuds7ysehp2izftbikjcahmxw65bq997qzan3k61aa0954g5icg9ipich8vxd4hwcb5em0j3qacs5f4pg9sqlsovjznal6rg82s1mo95stgptcm9lyloiuzh0htolfs25v7ak53fi147mkrpk4v0zbj33gtw6g8tneh8xrqjdmy78c9r36cqwlsrdhmiogbgua6bzuz3xjhwff4bvg9kcrbwyqaogp26afr9zye2xc8oyw1jrcsnl1wtzqw2pbe9z7sbaaw8oiphagcq4jzaroc3u2irjsfcq9fsgou0wzpbgxrru14aksq0z68hb3tfvh1wgsgtv9o0ltp38wtelkwhszjyslv418m9q6zz92coq33d04hxxxfrzloe7bt07clgzn8ssgyhstv5w3oxkex0a4o0ybo2g0n99egil7vam30qqz0e7i0je24nm8yutmwyzueb1szs5wwnga03hhkq0tgqz77bk9l67dzugepkzpfyanravbvhlhiqcy0t8t0uutqcgijrs4nclybqd9q0qnxztwfww0x3a7dqcbk3w2i829hlk0gmlmi8k87kmfjddm1vjszxmxatwl97lgxrpvwgkez1lwnlnig5zpkpo1t5d9e538v54t8l1syzrrqcr0l7fjmo8cqupvxz7vw2h9bb8fw8gr0ei58q2fatzj7a6v4bdlf9zzg3gi6ce9wpig9cfwal07icfvxf2i4opo08rf4omw1wrj0pr74pszi1q6nx55s3unkft7u24i4d13awtt8jtc969cepe06symccek3ruvrof4g66pwh7gmaidvgg739ow2il1zl6aqry2jv75yuyq6zx582knyls2foqlsu2jxt3q0e1slk5rfaqegthdtaptvxmsmw960hp8d0pbpb53v5ibqrl3v8ygb6669v1v6ejcy8ynkhzxjue0me2ybw6pk6qlf5yny15gkf2ru54eq690mcmg7fri88alvpnp9g8r3i7i0agkmbgsk7tu5mtnzmyto3buicojz6nlmrzm8grri81xycta4ez2uv6b4pgdfgnqkly8jkc9s9wc8lpetobnp8dvnrvcj3bos65y2vbmyuu5xwpoy91fxpco2oo5hrn241d68rh2vg9i8dcprddepmyawf9fgxpl6fsxucc5xy2fd8xzzlg1yxageiz3zeh4eij50xuepn2kiew42ymllrpsn870behayoyj4hny1z6x15il5kcub5gsj2z8711wu75u4fh0ex170e6i66fi0gvdqatjcqmjj3zk8fgnrlolwl49jjfju053pladgx8491qwhzxfizuse00l3wpo5kkimlcwydvxjmu7zs6z5552l4buwm2xyziwgyitscw7f8sc5zxiliku3m8efm3vlwfpi8jtmx940ztot7g8v5rnb2052apoj4dvb8c30c7wx9bgtwonfin0mmgipkpyxolmqpayo09l5yqr9d9w6iubrcjgwmvm7h5q2wp4xwtpzkm6l5qk12l7c5jmur2mr3lazgzd0b71sjeq1z0d9j88fgmqyoaj5jb8b584fozgamecbbwvj657tq98mkpvv7fqcv04xms4zg4aipruc88xaa65watuqbe0xdaz65pbt2ck6iys7f1lw2x24kpk88ppdvhg2tytkdo7ygfwu9yftfb5ppyby013hdfxjj758bz7vhb2flz
bgdc5ocgkcumuvomc89gwajniliaba4todukjcswwiy6dbi99xz0enzjz5xiv9yuyn0n62ikm7wz9dd1gftil4x6mb4niox4nx771axayo3xdmvgvjotnbeyv16rfmrjzcth3lnk0y8gbdqe8xml5m0v3mf3i8pboyibptgv47s83vth25g9hi9ykx2mzvqbmf12q00vo9ndlfrpintvrlmk9u150y7gyfvrggspj8p9rkigqgt1tzmpl3j7rfjb7ufzyiu6xahpmyv2ibnq51wf138w9p8nbg90a6mvif4bvwjva79d8d7xdvtl42v1mr2m6tvnr0c5dnzlrondgnlyxrkgs172sqsoaup1i7jw4p284s8xdk6zf6ke1eneo2mohb98d4rulxlkfdrf3px0qlootuu89usbrdme5f4o58zxgxtc4b0jrq9me016r5hhnk6zti7lqfebbdpzk7do78c8w3bep8xpfdb91y3zg7lbas8ynohiprvjqlr2mo005gaze441ly8jbk6s00maqv4sv7xs81 00:08:12.009 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:12.009 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:12.009 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:12.009 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:12.009 { 00:08:12.009 "subsystems": [ 00:08:12.009 { 00:08:12.009 "subsystem": "bdev", 00:08:12.009 "config": [ 00:08:12.009 { 00:08:12.009 "params": { 00:08:12.009 "trtype": "pcie", 00:08:12.009 "traddr": "0000:00:10.0", 00:08:12.009 "name": "Nvme0" 00:08:12.009 }, 00:08:12.009 "method": "bdev_nvme_attach_controller" 00:08:12.009 }, 00:08:12.009 { 00:08:12.009 "method": "bdev_wait_for_examine" 00:08:12.009 } 00:08:12.009 ] 00:08:12.009 } 00:08:12.009 ] 00:08:12.009 } 00:08:12.009 [2024-12-10 10:21:47.088834] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:12.009 [2024-12-10 10:21:47.088931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73037 ] 00:08:12.009 [2024-12-10 10:21:47.227904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.268 [2024-12-10 10:21:47.261053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.268 [2024-12-10 10:21:47.288559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.268  [2024-12-10T10:21:47.754Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:12.527 00:08:12.527 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:12.527 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:12.527 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:12.527 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:12.527 { 00:08:12.527 "subsystems": [ 00:08:12.527 { 00:08:12.527 "subsystem": "bdev", 00:08:12.527 "config": [ 00:08:12.527 { 00:08:12.527 "params": { 00:08:12.527 "trtype": "pcie", 00:08:12.527 "traddr": "0000:00:10.0", 00:08:12.527 "name": "Nvme0" 00:08:12.527 }, 00:08:12.527 "method": "bdev_nvme_attach_controller" 00:08:12.527 }, 00:08:12.527 { 00:08:12.527 "method": "bdev_wait_for_examine" 00:08:12.527 } 00:08:12.527 ] 00:08:12.527 } 00:08:12.527 ] 00:08:12.527 } 00:08:12.527 [2024-12-10 10:21:47.564300] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:12.527 [2024-12-10 10:21:47.564391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73045 ] 00:08:12.527 [2024-12-10 10:21:47.704317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.527 [2024-12-10 10:21:47.738039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.787 [2024-12-10 10:21:47.769950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.787  [2024-12-10T10:21:48.014Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:12.787 00:08:12.787 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:12.788 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ zkge1m4pr821upqiifixbhu3unz7ekmiihsjzqtcr4c3ae6ohuh31yhtgu397ddewsi30r1qscvesyeuuvrvbq4ee2yl5lmjqade2umpzom0ab49rd5gadu8k9oqy8k7ovgsbi25fl8bh1na50e4aiuzd7e1bi40dhwnl8rk1f87zvg7z38mc2frqclp9ebbegekef1e0g3sf3y76tuja8ov3k2i2thqursd8rbwhu70boqf3kw5opkgb4e4jyk3yszw12hdbflrb1kmsydmlrx2qs5owmgjxzj35zyxuqpusj3bxsva9xu1698j3f2j2vyf6b75pqe6ogy04xo8xti9pelygh74p0do0mslammnu0zq4s2xx03q7vettamqhcrx728sae8yj01e3aeaka1nudwip9cqbf359vrwbuafhdrrrqxb9ys88o0irs3q8l5oe3ky7lgvbldfskbq5xv85s9w097qwwzden6cqtpupn1k51n8w1blvi5xe25hpdrl0z6l45lci75rt8cbfcxsx4jt58v9viockjp4rfodyq43neru47nh8icytfidnexeo7sx56dusb0ie0c5mbtwhdfd5u9o48bvfrg84o9jp09hev5k411frv10f9vft0vhn5wah2ns6xzl1mybu745qecno8wablge6exwjykzjak03kzvjb34jgmtkpl4116vd5zlse18lly4yhxzpteofzlr6olhohvh9ui2lfcfpucv0pocxbt92pgkcgnnyui9f9sdhcit2mof3pqlrbd1kai6m35tjrmypy4ntcnjzpkrzpw8qyknt8nwx35fagfdsfuvetraledkghkjhra9z9p339zf0usdyja0ee5zes2esjrrgdndaxxa49inwfad1psnxzh42s2h91p8qtz9fnbv1sxqevp64usj2lobvln4lk6jzu0fgid2ygpto7bxdgk768o3ddvwz2zeo6fvfpkx3d0zfp59zs2awtb1lvw8oovmy3b2si9wlwcqga9wo20zkcpdr18jqcglg1elopcmws6jzxcbpafrf49qd5jzb1y2a00latsquwk929yvorfferad3tth2j9otwbfb5ybnwib1hwyzrpsl4fein8bt974h60znmr3ydedn75ozplinm5mzr3x88udtj4h8nvysy27aqhn9xjril5zu2fcdpf00jee69til46xjuow3cn4wwb2vyleqwbsh0qabqaj5btuhkrkrn36ol8xunk6m10f5ioch5vb0kwk4c1cjm9gkfr1przrhhn23m1sxnqvybftlrjdyu69zdb3upsiplaco5wuf0jfaw36piwct8kq8p6da79npo63v22djbph9djt8wnbyr811y79umo5qvmogvrkd7h2ppfvvconskqtz2bkwlqvrggxfdhrw8133cvqygbe9rg8rzyiuoxzickkcdq5gb1gf5gqxsj2vuj7r0jcw8wcsx89oc65xa9jmzy4hxp28f8smjiqamnv7pyyb1btup87l6wtpak6ukcxlqqia9y9vi8oap6otvt7oocecfii6yvohzbwzu9i8plvx9aab3xv6biw0w0npgxjcdimzsw7qh7sai07c7mbjvg31a8fe626cxud29r0qdxztjf4t9uijx9h5mp63ir7wjrrdnw3epffw63iuqdtmdmjrdplh8yxsuvywulzf4pazyd0lcq1hk3idg4lf21qfgstb8vgm5fnl1l78suj0tgrchuo2dmky728cvp4omja3rpcg9i7d7ew65txvyi07n06zqbh8ni9hkokishn3z0chrxjl6omktuds7ysehp2izftbikjcahmxw65bq997qzan3k61aa0954g5icg9ipich8vxd4hwcb5em0j3qacs5f4pg9sqlsovjznal6rg82s1mo95stgptcm9lyloiuzh0htolfs25v7ak53fi147mkrpk4v0zbj33gtw6g8tneh8xrqjdmy78c9r36cqwlsrdhmiogbgua6bzuz3xjhwff4bvg9kcrbwyqaogp26afr9zye2xc8oyw1jrcsnl1wtzqw2pbe9z7sbaaw8oiphagcq4jzaroc3u2irjsfcq9fsgou0wzpbgxrru14aksq0z68hb3tfvh1wgsgtv9o0ltp38wtelkwhszjyslv418m9q6zz92coq33d04hxxxfrzloe7bt07clgzn8ssgyhstv5w3oxkex0a4o0ybo2g0n99egil7vam30qqz0e7i0je24nm8yutmwyzueb1szs5wwnga03hhkq0tgqz77bk9l67dzugepkzpfyanravbvhlhiqcy0t8t0uutqcgijrs4nclybqd9q0qnxztwfww0x3a7dqcbk3w2i829hlk0gmlmi8k87kmfjddm1vjszxmxatwl97lgxrpvwgkez1lwnlnig5zpkpo1t5d9e538v54t8l1syzrrqcr0l7fjmo8cqupvxz7vw2h9bb8fw8gr0ei58q2fatzj7a6v4bdlf9zzg3gi6ce9wpig9cfwal07icfvxf2i4opo08rf4omw1wrj0pr74pszi1q6nx55s3unkft7u24i4d13a
wtt8jtc969cepe06symccek3ruvrof4g66pwh7gmaidvgg739ow2il1zl6aqry2jv75yuyq6zx582knyls2foqlsu2jxt3q0e1slk5rfaqegthdtaptvxmsmw960hp8d0pbpb53v5ibqrl3v8ygb6669v1v6ejcy8ynkhzxjue0me2ybw6pk6qlf5yny15gkf2ru54eq690mcmg7fri88alvpnp9g8r3i7i0agkmbgsk7tu5mtnzmyto3buicojz6nlmrzm8grri81xycta4ez2uv6b4pgdfgnqkly8jkc9s9wc8lpetobnp8dvnrvcj3bos65y2vbmyuu5xwpoy91fxpco2oo5hrn241d68rh2vg9i8dcprddepmyawf9fgxpl6fsxucc5xy2fd8xzzlg1yxageiz3zeh4eij50xuepn2kiew42ymllrpsn870behayoyj4hny1z6x15il5kcub5gsj2z8711wu75u4fh0ex170e6i66fi0gvdqatjcqmjj3zk8fgnrlolwl49jjfju053pladgx8491qwhzxfizuse00l3wpo5kkimlcwydvxjmu7zs6z5552l4buwm2xyziwgyitscw7f8sc5zxiliku3m8efm3vlwfpi8jtmx940ztot7g8v5rnb2052apoj4dvb8c30c7wx9bgtwonfin0mmgipkpyxolmqpayo09l5yqr9d9w6iubrcjgwmvm7h5q2wp4xwtpzkm6l5qk12l7c5jmur2mr3lazgzd0b71sjeq1z0d9j88fgmqyoaj5jb8b584fozgamecbbwvj657tq98mkpvv7fqcv04xms4zg4aipruc88xaa65watuqbe0xdaz65pbt2ck6iys7f1lw2x24kpk88ppdvhg2tytkdo7ygfwu9yftfb5ppyby013hdfxjj758bz7vhb2flzbgdc5ocgkcumuvomc89gwajniliaba4todukjcswwiy6dbi99xz0enzjz5xiv9yuyn0n62ikm7wz9dd1gftil4x6mb4niox4nx771axayo3xdmvgvjotnbeyv16rfmrjzcth3lnk0y8gbdqe8xml5m0v3mf3i8pboyibptgv47s83vth25g9hi9ykx2mzvqbmf12q00vo9ndlfrpintvrlmk9u150y7gyfvrggspj8p9rkigqgt1tzmpl3j7rfjb7ufzyiu6xahpmyv2ibnq51wf138w9p8nbg90a6mvif4bvwjva79d8d7xdvtl42v1mr2m6tvnr0c5dnzlrondgnlyxrkgs172sqsoaup1i7jw4p284s8xdk6zf6ke1eneo2mohb98d4rulxlkfdrf3px0qlootuu89usbrdme5f4o58zxgxtc4b0jrq9me016r5hhnk6zti7lqfebbdpzk7do78c8w3bep8xpfdb91y3zg7lbas8ynohiprvjqlr2mo005gaze441ly8jbk6s00maqv4sv7xs81 == \z\k\g\e\1\m\4\p\r\8\2\1\u\p\q\i\i\f\i\x\b\h\u\3\u\n\z\7\e\k\m\i\i\h\s\j\z\q\t\c\r\4\c\3\a\e\6\o\h\u\h\3\1\y\h\t\g\u\3\9\7\d\d\e\w\s\i\3\0\r\1\q\s\c\v\e\s\y\e\u\u\v\r\v\b\q\4\e\e\2\y\l\5\l\m\j\q\a\d\e\2\u\m\p\z\o\m\0\a\b\4\9\r\d\5\g\a\d\u\8\k\9\o\q\y\8\k\7\o\v\g\s\b\i\2\5\f\l\8\b\h\1\n\a\5\0\e\4\a\i\u\z\d\7\e\1\b\i\4\0\d\h\w\n\l\8\r\k\1\f\8\7\z\v\g\7\z\3\8\m\c\2\f\r\q\c\l\p\9\e\b\b\e\g\e\k\e\f\1\e\0\g\3\s\f\3\y\7\6\t\u\j\a\8\o\v\3\k\2\i\2\t\h\q\u\r\s\d\8\r\b\w\h\u\7\0\b\o\q\f\3\k\w\5\o\p\k\g\b\4\e\4\j\y\k\3\y\s\z\w\1\2\h\d\b\f\l\r\b\1\k\m\s\y\d\m\l\r\x\2\q\s\5\o\w\m\g\j\x\z\j\3\5\z\y\x\u\q\p\u\s\j\3\b\x\s\v\a\9\x\u\1\6\9\8\j\3\f\2\j\2\v\y\f\6\b\7\5\p\q\e\6\o\g\y\0\4\x\o\8\x\t\i\9\p\e\l\y\g\h\7\4\p\0\d\o\0\m\s\l\a\m\m\n\u\0\z\q\4\s\2\x\x\0\3\q\7\v\e\t\t\a\m\q\h\c\r\x\7\2\8\s\a\e\8\y\j\0\1\e\3\a\e\a\k\a\1\n\u\d\w\i\p\9\c\q\b\f\3\5\9\v\r\w\b\u\a\f\h\d\r\r\r\q\x\b\9\y\s\8\8\o\0\i\r\s\3\q\8\l\5\o\e\3\k\y\7\l\g\v\b\l\d\f\s\k\b\q\5\x\v\8\5\s\9\w\0\9\7\q\w\w\z\d\e\n\6\c\q\t\p\u\p\n\1\k\5\1\n\8\w\1\b\l\v\i\5\x\e\2\5\h\p\d\r\l\0\z\6\l\4\5\l\c\i\7\5\r\t\8\c\b\f\c\x\s\x\4\j\t\5\8\v\9\v\i\o\c\k\j\p\4\r\f\o\d\y\q\4\3\n\e\r\u\4\7\n\h\8\i\c\y\t\f\i\d\n\e\x\e\o\7\s\x\5\6\d\u\s\b\0\i\e\0\c\5\m\b\t\w\h\d\f\d\5\u\9\o\4\8\b\v\f\r\g\8\4\o\9\j\p\0\9\h\e\v\5\k\4\1\1\f\r\v\1\0\f\9\v\f\t\0\v\h\n\5\w\a\h\2\n\s\6\x\z\l\1\m\y\b\u\7\4\5\q\e\c\n\o\8\w\a\b\l\g\e\6\e\x\w\j\y\k\z\j\a\k\0\3\k\z\v\j\b\3\4\j\g\m\t\k\p\l\4\1\1\6\v\d\5\z\l\s\e\1\8\l\l\y\4\y\h\x\z\p\t\e\o\f\z\l\r\6\o\l\h\o\h\v\h\9\u\i\2\l\f\c\f\p\u\c\v\0\p\o\c\x\b\t\9\2\p\g\k\c\g\n\n\y\u\i\9\f\9\s\d\h\c\i\t\2\m\o\f\3\p\q\l\r\b\d\1\k\a\i\6\m\3\5\t\j\r\m\y\p\y\4\n\t\c\n\j\z\p\k\r\z\p\w\8\q\y\k\n\t\8\n\w\x\3\5\f\a\g\f\d\s\f\u\v\e\t\r\a\l\e\d\k\g\h\k\j\h\r\a\9\z\9\p\3\3\9\z\f\0\u\s\d\y\j\a\0\e\e\5\z\e\s\2\e\s\j\r\r\g\d\n\d\a\x\x\a\4\9\i\n\w\f\a\d\1\p\s\n\x\z\h\4\2\s\2\h\9\1\p\8\q\t\z\9\f\n\b\v\1\s\x\q\e\v\p\6\4\u\s\j\2\l\o\b\v\l\n\4\l\k\6\j\z\u\0\f\g\i\d\2\y\g\p\t\o\7\b\x\d\g\k\7\6\8\o\3\d\d\v\w\z\2\z\e\o\6\f\v\f\p\k\x\3\d\0\z\f\p\5\9\z\s\2\a\w\t\b\1\l\v\w\8\o\o\v\m\y\3\b\2\s\i\9\w\l\w\c\q\g\a\9\w\o\2\0\
z\k\c\p\d\r\1\8\j\q\c\g\l\g\1\e\l\o\p\c\m\w\s\6\j\z\x\c\b\p\a\f\r\f\4\9\q\d\5\j\z\b\1\y\2\a\0\0\l\a\t\s\q\u\w\k\9\2\9\y\v\o\r\f\f\e\r\a\d\3\t\t\h\2\j\9\o\t\w\b\f\b\5\y\b\n\w\i\b\1\h\w\y\z\r\p\s\l\4\f\e\i\n\8\b\t\9\7\4\h\6\0\z\n\m\r\3\y\d\e\d\n\7\5\o\z\p\l\i\n\m\5\m\z\r\3\x\8\8\u\d\t\j\4\h\8\n\v\y\s\y\2\7\a\q\h\n\9\x\j\r\i\l\5\z\u\2\f\c\d\p\f\0\0\j\e\e\6\9\t\i\l\4\6\x\j\u\o\w\3\c\n\4\w\w\b\2\v\y\l\e\q\w\b\s\h\0\q\a\b\q\a\j\5\b\t\u\h\k\r\k\r\n\3\6\o\l\8\x\u\n\k\6\m\1\0\f\5\i\o\c\h\5\v\b\0\k\w\k\4\c\1\c\j\m\9\g\k\f\r\1\p\r\z\r\h\h\n\2\3\m\1\s\x\n\q\v\y\b\f\t\l\r\j\d\y\u\6\9\z\d\b\3\u\p\s\i\p\l\a\c\o\5\w\u\f\0\j\f\a\w\3\6\p\i\w\c\t\8\k\q\8\p\6\d\a\7\9\n\p\o\6\3\v\2\2\d\j\b\p\h\9\d\j\t\8\w\n\b\y\r\8\1\1\y\7\9\u\m\o\5\q\v\m\o\g\v\r\k\d\7\h\2\p\p\f\v\v\c\o\n\s\k\q\t\z\2\b\k\w\l\q\v\r\g\g\x\f\d\h\r\w\8\1\3\3\c\v\q\y\g\b\e\9\r\g\8\r\z\y\i\u\o\x\z\i\c\k\k\c\d\q\5\g\b\1\g\f\5\g\q\x\s\j\2\v\u\j\7\r\0\j\c\w\8\w\c\s\x\8\9\o\c\6\5\x\a\9\j\m\z\y\4\h\x\p\2\8\f\8\s\m\j\i\q\a\m\n\v\7\p\y\y\b\1\b\t\u\p\8\7\l\6\w\t\p\a\k\6\u\k\c\x\l\q\q\i\a\9\y\9\v\i\8\o\a\p\6\o\t\v\t\7\o\o\c\e\c\f\i\i\6\y\v\o\h\z\b\w\z\u\9\i\8\p\l\v\x\9\a\a\b\3\x\v\6\b\i\w\0\w\0\n\p\g\x\j\c\d\i\m\z\s\w\7\q\h\7\s\a\i\0\7\c\7\m\b\j\v\g\3\1\a\8\f\e\6\2\6\c\x\u\d\2\9\r\0\q\d\x\z\t\j\f\4\t\9\u\i\j\x\9\h\5\m\p\6\3\i\r\7\w\j\r\r\d\n\w\3\e\p\f\f\w\6\3\i\u\q\d\t\m\d\m\j\r\d\p\l\h\8\y\x\s\u\v\y\w\u\l\z\f\4\p\a\z\y\d\0\l\c\q\1\h\k\3\i\d\g\4\l\f\2\1\q\f\g\s\t\b\8\v\g\m\5\f\n\l\1\l\7\8\s\u\j\0\t\g\r\c\h\u\o\2\d\m\k\y\7\2\8\c\v\p\4\o\m\j\a\3\r\p\c\g\9\i\7\d\7\e\w\6\5\t\x\v\y\i\0\7\n\0\6\z\q\b\h\8\n\i\9\h\k\o\k\i\s\h\n\3\z\0\c\h\r\x\j\l\6\o\m\k\t\u\d\s\7\y\s\e\h\p\2\i\z\f\t\b\i\k\j\c\a\h\m\x\w\6\5\b\q\9\9\7\q\z\a\n\3\k\6\1\a\a\0\9\5\4\g\5\i\c\g\9\i\p\i\c\h\8\v\x\d\4\h\w\c\b\5\e\m\0\j\3\q\a\c\s\5\f\4\p\g\9\s\q\l\s\o\v\j\z\n\a\l\6\r\g\8\2\s\1\m\o\9\5\s\t\g\p\t\c\m\9\l\y\l\o\i\u\z\h\0\h\t\o\l\f\s\2\5\v\7\a\k\5\3\f\i\1\4\7\m\k\r\p\k\4\v\0\z\b\j\3\3\g\t\w\6\g\8\t\n\e\h\8\x\r\q\j\d\m\y\7\8\c\9\r\3\6\c\q\w\l\s\r\d\h\m\i\o\g\b\g\u\a\6\b\z\u\z\3\x\j\h\w\f\f\4\b\v\g\9\k\c\r\b\w\y\q\a\o\g\p\2\6\a\f\r\9\z\y\e\2\x\c\8\o\y\w\1\j\r\c\s\n\l\1\w\t\z\q\w\2\p\b\e\9\z\7\s\b\a\a\w\8\o\i\p\h\a\g\c\q\4\j\z\a\r\o\c\3\u\2\i\r\j\s\f\c\q\9\f\s\g\o\u\0\w\z\p\b\g\x\r\r\u\1\4\a\k\s\q\0\z\6\8\h\b\3\t\f\v\h\1\w\g\s\g\t\v\9\o\0\l\t\p\3\8\w\t\e\l\k\w\h\s\z\j\y\s\l\v\4\1\8\m\9\q\6\z\z\9\2\c\o\q\3\3\d\0\4\h\x\x\x\f\r\z\l\o\e\7\b\t\0\7\c\l\g\z\n\8\s\s\g\y\h\s\t\v\5\w\3\o\x\k\e\x\0\a\4\o\0\y\b\o\2\g\0\n\9\9\e\g\i\l\7\v\a\m\3\0\q\q\z\0\e\7\i\0\j\e\2\4\n\m\8\y\u\t\m\w\y\z\u\e\b\1\s\z\s\5\w\w\n\g\a\0\3\h\h\k\q\0\t\g\q\z\7\7\b\k\9\l\6\7\d\z\u\g\e\p\k\z\p\f\y\a\n\r\a\v\b\v\h\l\h\i\q\c\y\0\t\8\t\0\u\u\t\q\c\g\i\j\r\s\4\n\c\l\y\b\q\d\9\q\0\q\n\x\z\t\w\f\w\w\0\x\3\a\7\d\q\c\b\k\3\w\2\i\8\2\9\h\l\k\0\g\m\l\m\i\8\k\8\7\k\m\f\j\d\d\m\1\v\j\s\z\x\m\x\a\t\w\l\9\7\l\g\x\r\p\v\w\g\k\e\z\1\l\w\n\l\n\i\g\5\z\p\k\p\o\1\t\5\d\9\e\5\3\8\v\5\4\t\8\l\1\s\y\z\r\r\q\c\r\0\l\7\f\j\m\o\8\c\q\u\p\v\x\z\7\v\w\2\h\9\b\b\8\f\w\8\g\r\0\e\i\5\8\q\2\f\a\t\z\j\7\a\6\v\4\b\d\l\f\9\z\z\g\3\g\i\6\c\e\9\w\p\i\g\9\c\f\w\a\l\0\7\i\c\f\v\x\f\2\i\4\o\p\o\0\8\r\f\4\o\m\w\1\w\r\j\0\p\r\7\4\p\s\z\i\1\q\6\n\x\5\5\s\3\u\n\k\f\t\7\u\2\4\i\4\d\1\3\a\w\t\t\8\j\t\c\9\6\9\c\e\p\e\0\6\s\y\m\c\c\e\k\3\r\u\v\r\o\f\4\g\6\6\p\w\h\7\g\m\a\i\d\v\g\g\7\3\9\o\w\2\i\l\1\z\l\6\a\q\r\y\2\j\v\7\5\y\u\y\q\6\z\x\5\8\2\k\n\y\l\s\2\f\o\q\l\s\u\2\j\x\t\3\q\0\e\1\s\l\k\5\r\f\a\q\e\g\t\h\d\t\a\p\t\v\x\m\s\m\w\9\6\0\h\p\8\d\0\p\b\p\b\5\3\v\5\i\b\q\r\l\3\v\8\y\g\b\6\6\6\9\v\1\v\6\e\j\c\y\8\y\n\k\h\z\x\j\u\e\0\m\e\2\y\b\w\6\p\k\6\q\l\f\5\y\n\y\1\5\g\k\f\2\r\u\5\4\e\q\6
\9\0\m\c\m\g\7\f\r\i\8\8\a\l\v\p\n\p\9\g\8\r\3\i\7\i\0\a\g\k\m\b\g\s\k\7\t\u\5\m\t\n\z\m\y\t\o\3\b\u\i\c\o\j\z\6\n\l\m\r\z\m\8\g\r\r\i\8\1\x\y\c\t\a\4\e\z\2\u\v\6\b\4\p\g\d\f\g\n\q\k\l\y\8\j\k\c\9\s\9\w\c\8\l\p\e\t\o\b\n\p\8\d\v\n\r\v\c\j\3\b\o\s\6\5\y\2\v\b\m\y\u\u\5\x\w\p\o\y\9\1\f\x\p\c\o\2\o\o\5\h\r\n\2\4\1\d\6\8\r\h\2\v\g\9\i\8\d\c\p\r\d\d\e\p\m\y\a\w\f\9\f\g\x\p\l\6\f\s\x\u\c\c\5\x\y\2\f\d\8\x\z\z\l\g\1\y\x\a\g\e\i\z\3\z\e\h\4\e\i\j\5\0\x\u\e\p\n\2\k\i\e\w\4\2\y\m\l\l\r\p\s\n\8\7\0\b\e\h\a\y\o\y\j\4\h\n\y\1\z\6\x\1\5\i\l\5\k\c\u\b\5\g\s\j\2\z\8\7\1\1\w\u\7\5\u\4\f\h\0\e\x\1\7\0\e\6\i\6\6\f\i\0\g\v\d\q\a\t\j\c\q\m\j\j\3\z\k\8\f\g\n\r\l\o\l\w\l\4\9\j\j\f\j\u\0\5\3\p\l\a\d\g\x\8\4\9\1\q\w\h\z\x\f\i\z\u\s\e\0\0\l\3\w\p\o\5\k\k\i\m\l\c\w\y\d\v\x\j\m\u\7\z\s\6\z\5\5\5\2\l\4\b\u\w\m\2\x\y\z\i\w\g\y\i\t\s\c\w\7\f\8\s\c\5\z\x\i\l\i\k\u\3\m\8\e\f\m\3\v\l\w\f\p\i\8\j\t\m\x\9\4\0\z\t\o\t\7\g\8\v\5\r\n\b\2\0\5\2\a\p\o\j\4\d\v\b\8\c\3\0\c\7\w\x\9\b\g\t\w\o\n\f\i\n\0\m\m\g\i\p\k\p\y\x\o\l\m\q\p\a\y\o\0\9\l\5\y\q\r\9\d\9\w\6\i\u\b\r\c\j\g\w\m\v\m\7\h\5\q\2\w\p\4\x\w\t\p\z\k\m\6\l\5\q\k\1\2\l\7\c\5\j\m\u\r\2\m\r\3\l\a\z\g\z\d\0\b\7\1\s\j\e\q\1\z\0\d\9\j\8\8\f\g\m\q\y\o\a\j\5\j\b\8\b\5\8\4\f\o\z\g\a\m\e\c\b\b\w\v\j\6\5\7\t\q\9\8\m\k\p\v\v\7\f\q\c\v\0\4\x\m\s\4\z\g\4\a\i\p\r\u\c\8\8\x\a\a\6\5\w\a\t\u\q\b\e\0\x\d\a\z\6\5\p\b\t\2\c\k\6\i\y\s\7\f\1\l\w\2\x\2\4\k\p\k\8\8\p\p\d\v\h\g\2\t\y\t\k\d\o\7\y\g\f\w\u\9\y\f\t\f\b\5\p\p\y\b\y\0\1\3\h\d\f\x\j\j\7\5\8\b\z\7\v\h\b\2\f\l\z\b\g\d\c\5\o\c\g\k\c\u\m\u\v\o\m\c\8\9\g\w\a\j\n\i\l\i\a\b\a\4\t\o\d\u\k\j\c\s\w\w\i\y\6\d\b\i\9\9\x\z\0\e\n\z\j\z\5\x\i\v\9\y\u\y\n\0\n\6\2\i\k\m\7\w\z\9\d\d\1\g\f\t\i\l\4\x\6\m\b\4\n\i\o\x\4\n\x\7\7\1\a\x\a\y\o\3\x\d\m\v\g\v\j\o\t\n\b\e\y\v\1\6\r\f\m\r\j\z\c\t\h\3\l\n\k\0\y\8\g\b\d\q\e\8\x\m\l\5\m\0\v\3\m\f\3\i\8\p\b\o\y\i\b\p\t\g\v\4\7\s\8\3\v\t\h\2\5\g\9\h\i\9\y\k\x\2\m\z\v\q\b\m\f\1\2\q\0\0\v\o\9\n\d\l\f\r\p\i\n\t\v\r\l\m\k\9\u\1\5\0\y\7\g\y\f\v\r\g\g\s\p\j\8\p\9\r\k\i\g\q\g\t\1\t\z\m\p\l\3\j\7\r\f\j\b\7\u\f\z\y\i\u\6\x\a\h\p\m\y\v\2\i\b\n\q\5\1\w\f\1\3\8\w\9\p\8\n\b\g\9\0\a\6\m\v\i\f\4\b\v\w\j\v\a\7\9\d\8\d\7\x\d\v\t\l\4\2\v\1\m\r\2\m\6\t\v\n\r\0\c\5\d\n\z\l\r\o\n\d\g\n\l\y\x\r\k\g\s\1\7\2\s\q\s\o\a\u\p\1\i\7\j\w\4\p\2\8\4\s\8\x\d\k\6\z\f\6\k\e\1\e\n\e\o\2\m\o\h\b\9\8\d\4\r\u\l\x\l\k\f\d\r\f\3\p\x\0\q\l\o\o\t\u\u\8\9\u\s\b\r\d\m\e\5\f\4\o\5\8\z\x\g\x\t\c\4\b\0\j\r\q\9\m\e\0\1\6\r\5\h\h\n\k\6\z\t\i\7\l\q\f\e\b\b\d\p\z\k\7\d\o\7\8\c\8\w\3\b\e\p\8\x\p\f\d\b\9\1\y\3\z\g\7\l\b\a\s\8\y\n\o\h\i\p\r\v\j\q\l\r\2\m\o\0\0\5\g\a\z\e\4\4\1\l\y\8\j\b\k\6\s\0\0\m\a\q\v\4\s\v\7\x\s\8\1 ]] 00:08:12.788 00:08:12.788 real 0m1.010s 00:08:12.788 user 0m0.685s 00:08:12.788 sys 0m0.387s 00:08:12.788 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.788 10:21:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:12.788 ************************************ 00:08:12.788 END TEST dd_rw_offset 00:08:12.788 ************************************ 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.047 10:21:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.047 { 00:08:13.047 "subsystems": [ 00:08:13.047 { 00:08:13.047 "subsystem": "bdev", 00:08:13.047 "config": [ 00:08:13.047 { 00:08:13.047 "params": { 00:08:13.047 "trtype": "pcie", 00:08:13.047 "traddr": "0000:00:10.0", 00:08:13.047 "name": "Nvme0" 00:08:13.047 }, 00:08:13.047 "method": "bdev_nvme_attach_controller" 00:08:13.047 }, 00:08:13.047 { 00:08:13.047 "method": "bdev_wait_for_examine" 00:08:13.047 } 00:08:13.047 ] 00:08:13.047 } 00:08:13.047 ] 00:08:13.047 } 00:08:13.047 [2024-12-10 10:21:48.093680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:13.047 [2024-12-10 10:21:48.093803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73077 ] 00:08:13.047 [2024-12-10 10:21:48.232575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.047 [2024-12-10 10:21:48.264836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.306 [2024-12-10 10:21:48.294231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.306  [2024-12-10T10:21:48.533Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.306 00:08:13.306 10:21:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.306 00:08:13.306 real 0m14.188s 00:08:13.306 user 0m10.165s 00:08:13.306 sys 0m4.505s 00:08:13.306 10:21:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.306 10:21:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.306 ************************************ 00:08:13.306 END TEST spdk_dd_basic_rw 00:08:13.306 ************************************ 00:08:13.566 10:21:48 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:13.566 10:21:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.566 10:21:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.566 10:21:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:13.566 ************************************ 00:08:13.566 START TEST spdk_dd_posix 00:08:13.566 ************************************ 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:13.566 * Looking for test storage... 
00:08:13.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.566 --rc genhtml_branch_coverage=1 00:08:13.566 --rc genhtml_function_coverage=1 00:08:13.566 --rc genhtml_legend=1 00:08:13.566 --rc geninfo_all_blocks=1 00:08:13.566 --rc geninfo_unexecuted_blocks=1 00:08:13.566 00:08:13.566 ' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.566 --rc genhtml_branch_coverage=1 00:08:13.566 --rc genhtml_function_coverage=1 00:08:13.566 --rc genhtml_legend=1 00:08:13.566 --rc geninfo_all_blocks=1 00:08:13.566 --rc geninfo_unexecuted_blocks=1 00:08:13.566 00:08:13.566 ' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.566 --rc genhtml_branch_coverage=1 00:08:13.566 --rc genhtml_function_coverage=1 00:08:13.566 --rc genhtml_legend=1 00:08:13.566 --rc geninfo_all_blocks=1 00:08:13.566 --rc geninfo_unexecuted_blocks=1 00:08:13.566 00:08:13.566 ' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.566 --rc genhtml_branch_coverage=1 00:08:13.566 --rc genhtml_function_coverage=1 00:08:13.566 --rc genhtml_legend=1 00:08:13.566 --rc geninfo_all_blocks=1 00:08:13.566 --rc geninfo_unexecuted_blocks=1 00:08:13.566 00:08:13.566 ' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:13.566 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:13.567 * First test run, liburing in use 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:13.567 ************************************ 00:08:13.567 START TEST dd_flag_append 00:08:13.567 ************************************ 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=x729nwpbmu5676533i4x59x2izuyfn5h 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=vmcnb267g8hx7vwbujangvneiv3cjlo5 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s x729nwpbmu5676533i4x59x2izuyfn5h 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s vmcnb267g8hx7vwbujangvneiv3cjlo5 00:08:13.567 10:21:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:13.826 [2024-12-10 10:21:48.827042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:13.826 [2024-12-10 10:21:48.827153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73141 ] 00:08:13.826 [2024-12-10 10:21:48.962739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.826 [2024-12-10 10:21:48.995138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.826 [2024-12-10 10:21:49.022738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.826  [2024-12-10T10:21:49.312Z] Copying: 32/32 [B] (average 31 kBps) 00:08:14.085 00:08:14.085 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ vmcnb267g8hx7vwbujangvneiv3cjlo5x729nwpbmu5676533i4x59x2izuyfn5h == \v\m\c\n\b\2\6\7\g\8\h\x\7\v\w\b\u\j\a\n\g\v\n\e\i\v\3\c\j\l\o\5\x\7\2\9\n\w\p\b\m\u\5\6\7\6\5\3\3\i\4\x\5\9\x\2\i\z\u\y\f\n\5\h ]] 00:08:14.085 00:08:14.085 real 0m0.402s 00:08:14.085 user 0m0.202s 00:08:14.085 sys 0m0.179s 00:08:14.085 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.085 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:14.085 ************************************ 00:08:14.085 END TEST dd_flag_append 00:08:14.085 ************************************ 00:08:14.085 10:21:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:14.085 10:21:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.085 10:21:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:14.086 ************************************ 00:08:14.086 START TEST dd_flag_directory 00:08:14.086 ************************************ 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.086 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.086 [2024-12-10 10:21:49.274731] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:14.086 [2024-12-10 10:21:49.274831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73175 ] 00:08:14.348 [2024-12-10 10:21:49.412836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.348 [2024-12-10 10:21:49.446455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.348 [2024-12-10 10:21:49.473248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.348 [2024-12-10 10:21:49.488041] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:14.348 [2024-12-10 10:21:49.488112] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:14.348 [2024-12-10 10:21:49.488141] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.348 [2024-12-10 10:21:49.544791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.626 10:21:49 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.626 10:21:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:14.626 [2024-12-10 10:21:49.670427] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:14.626 [2024-12-10 10:21:49.670540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73179 ] 00:08:14.626 [2024-12-10 10:21:49.806488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.626 [2024-12-10 10:21:49.841665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.907 [2024-12-10 10:21:49.871915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.907 [2024-12-10 10:21:49.887240] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:14.907 [2024-12-10 10:21:49.887309] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:14.907 [2024-12-10 10:21:49.887338] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.907 [2024-12-10 10:21:49.951991] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.907 00:08:14.907 real 0m0.803s 00:08:14.907 user 0m0.400s 00:08:14.907 sys 0m0.195s 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:14.907 ************************************ 00:08:14.907 END TEST dd_flag_directory 00:08:14.907 ************************************ 00:08:14.907 10:21:50 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:14.907 ************************************ 00:08:14.907 START TEST dd_flag_nofollow 00:08:14.907 ************************************ 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.907 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.166 [2024-12-10 10:21:50.138682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:15.166 [2024-12-10 10:21:50.138787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73207 ] 00:08:15.166 [2024-12-10 10:21:50.277427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.166 [2024-12-10 10:21:50.309959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.166 [2024-12-10 10:21:50.337517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.166 [2024-12-10 10:21:50.352453] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:15.166 [2024-12-10 10:21:50.352531] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:15.166 [2024-12-10 10:21:50.352562] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.426 [2024-12-10 10:21:50.412324] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.426 10:21:50 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.426 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:15.426 [2024-12-10 10:21:50.519664] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:15.426 [2024-12-10 10:21:50.519767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73217 ] 00:08:15.426 [2024-12-10 10:21:50.643452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.684 [2024-12-10 10:21:50.677318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.684 [2024-12-10 10:21:50.706115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.684 [2024-12-10 10:21:50.720896] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:15.684 [2024-12-10 10:21:50.720961] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:15.684 [2024-12-10 10:21:50.720990] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.684 [2024-12-10 10:21:50.779517] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:15.685 10:21:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.685 [2024-12-10 10:21:50.908585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:15.685 [2024-12-10 10:21:50.908697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73219 ] 00:08:15.943 [2024-12-10 10:21:51.047895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.943 [2024-12-10 10:21:51.079982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.943 [2024-12-10 10:21:51.107242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.943  [2024-12-10T10:21:51.429Z] Copying: 512/512 [B] (average 500 kBps) 00:08:16.202 00:08:16.202 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ kq3cyk26qj8mf7vzfatqat13bd88dznnfltezl2t0eiutkr2xds2xnux9q6nu525woybelj9bmt2kqkjfe8u3jz5y5ez01wjmqr1iuktfv0sswplwql2rgrmjtixppej1r92jofw7oc0c5um566w3lvqenuuhke4uoh1yevdkeby9tg6eevitd0busf4gd4nldqg0d3lsy921egbw4y27ud83da5jvjx286cv9wv6urobt57y4efg31xd8fakx8nzz4wrxo4vqjfip6b2rt9hjy5pfasb8xdiopj37wdnbw8th2lvcnaec0ylgnsv8gtr7agjiiyfxnpcf75q0j94rzs3f8yxbssjldb7jb9bmuiljl3hdsdgz7f557l7u4kxyx1ret63t7cpqp17rs2nnknnbu9ziuigpnniaee7cqwt3pyxn9omcrvwzin6sy3rrfuyehdw6cpndr5gxo3gb4qe1ypinpkxt2t8yq6muasdlch15i2235gt4xwrvtg == \k\q\3\c\y\k\2\6\q\j\8\m\f\7\v\z\f\a\t\q\a\t\1\3\b\d\8\8\d\z\n\n\f\l\t\e\z\l\2\t\0\e\i\u\t\k\r\2\x\d\s\2\x\n\u\x\9\q\6\n\u\5\2\5\w\o\y\b\e\l\j\9\b\m\t\2\k\q\k\j\f\e\8\u\3\j\z\5\y\5\e\z\0\1\w\j\m\q\r\1\i\u\k\t\f\v\0\s\s\w\p\l\w\q\l\2\r\g\r\m\j\t\i\x\p\p\e\j\1\r\9\2\j\o\f\w\7\o\c\0\c\5\u\m\5\6\6\w\3\l\v\q\e\n\u\u\h\k\e\4\u\o\h\1\y\e\v\d\k\e\b\y\9\t\g\6\e\e\v\i\t\d\0\b\u\s\f\4\g\d\4\n\l\d\q\g\0\d\3\l\s\y\9\2\1\e\g\b\w\4\y\2\7\u\d\8\3\d\a\5\j\v\j\x\2\8\6\c\v\9\w\v\6\u\r\o\b\t\5\7\y\4\e\f\g\3\1\x\d\8\f\a\k\x\8\n\z\z\4\w\r\x\o\4\v\q\j\f\i\p\6\b\2\r\t\9\h\j\y\5\p\f\a\s\b\8\x\d\i\o\p\j\3\7\w\d\n\b\w\8\t\h\2\l\v\c\n\a\e\c\0\y\l\g\n\s\v\8\g\t\r\7\a\g\j\i\i\y\f\x\n\p\c\f\7\5\q\0\j\9\4\r\z\s\3\f\8\y\x\b\s\s\j\l\d\b\7\j\b\9\b\m\u\i\l\j\l\3\h\d\s\d\g\z\7\f\5\5\7\l\7\u\4\k\x\y\x\1\r\e\t\6\3\t\7\c\p\q\p\1\7\r\s\2\n\n\k\n\n\b\u\9\z\i\u\i\g\p\n\n\i\a\e\e\7\c\q\w\t\3\p\y\x\n\9\o\m\c\r\v\w\z\i\n\6\s\y\3\r\r\f\u\y\e\h\d\w\6\c\p\n\d\r\5\g\x\o\3\g\b\4\q\e\1\y\p\i\n\p\k\x\t\2\t\8\y\q\6\m\u\a\s\d\l\c\h\1\5\i\2\2\3\5\g\t\4\x\w\r\v\t\g ]] 00:08:16.202 00:08:16.202 real 0m1.180s 00:08:16.202 user 0m0.574s 00:08:16.202 sys 0m0.366s 00:08:16.202 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.202 ************************************ 00:08:16.202 END TEST dd_flag_nofollow 00:08:16.202 ************************************ 00:08:16.202 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:16.203 ************************************ 00:08:16.203 START TEST dd_flag_noatime 00:08:16.203 ************************************ 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733826111 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733826111 00:08:16.203 10:21:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:17.139 10:21:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.397 [2024-12-10 10:21:52.382091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:17.398 [2024-12-10 10:21:52.382204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73267 ] 00:08:17.398 [2024-12-10 10:21:52.523317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.398 [2024-12-10 10:21:52.564630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.398 [2024-12-10 10:21:52.597147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.398  [2024-12-10T10:21:52.883Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.656 00:08:17.656 10:21:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.656 10:21:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733826111 )) 00:08:17.657 10:21:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.657 10:21:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733826111 )) 00:08:17.657 10:21:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.657 [2024-12-10 10:21:52.810643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
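The noatime assertions above capture the access time of dd.dump0 with stat --printf=%X, copy it with --iflag=noatime, and require the atime to stay at the recorded epoch ((( atime_if == 1733826111 ))); the follow-up copy without noatime is then expected to advance it. A rough equivalent with GNU dd, assuming the filesystem is not mounted noatime, that relatime does not suppress the second update, and that the caller owns the file (O_NOATIME otherwise needs CAP_FOWNER):

    before=$(stat --printf=%X data.bin)
    dd if=data.bin of=copy.bin iflag=noatime          # read without touching the access time
    [ "$(stat --printf=%X data.bin)" -eq "$before" ]  # atime must be unchanged
    sleep 1
    dd if=data.bin of=copy.bin                        # ordinary read may update the atime
    [ "$(stat --printf=%X data.bin)" -gt "$before" ]  # expected to have advanced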
00:08:17.657 [2024-12-10 10:21:52.810747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73275 ] 00:08:17.915 [2024-12-10 10:21:52.951525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.915 [2024-12-10 10:21:52.992909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.915 [2024-12-10 10:21:53.024844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.915  [2024-12-10T10:21:53.401Z] Copying: 512/512 [B] (average 500 kBps) 00:08:18.174 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733826113 )) 00:08:18.174 00:08:18.174 real 0m1.876s 00:08:18.174 user 0m0.436s 00:08:18.174 sys 0m0.382s 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.174 ************************************ 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:18.174 END TEST dd_flag_noatime 00:08:18.174 ************************************ 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.174 ************************************ 00:08:18.174 START TEST dd_flags_misc 00:08:18.174 ************************************ 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.174 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:18.174 [2024-12-10 10:21:53.299699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:18.174 [2024-12-10 10:21:53.299811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73309 ] 00:08:18.433 [2024-12-10 10:21:53.437383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.433 [2024-12-10 10:21:53.471086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.433 [2024-12-10 10:21:53.497844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.433  [2024-12-10T10:21:53.660Z] Copying: 512/512 [B] (average 500 kBps) 00:08:18.433 00:08:18.433 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdeik4qlpd6vqzks6msgvz0yphyuh5of3kssbwe16nyms7t0oglg4qktf5kqxwdmdhv4wa50vx06afyh9v33km902a5pzb1he4njx12tzq62aq3sc6es5o7zwn5f87gnvl80q0cwvjvtwrdnvkffwib912zlmwife9aq0wrpywkhkzvwkfzc1uwf1kw45fhofun9dhyic7lirmk7jgmgoqswpu38ce1uguni47jegomimdex1w1po8czp2qb25qen6vpiuo93ayz0dfhhvc7crtos2q684n2hsutjw7ydcil2dvcw7in7jaidlznau91my5opi4l1i5x0vfcoyqy85upuyl0kczlgg5vo2b8lipeoobq9435peeo72eq26rg05ch94jzh7c1yh3l5sfootyto7ntpid71x4240zvbixvmgv2l9reytq7s91p6nv2j4jxlrd3numctzufk8kzrxrqg2sxu0xbdnaun4vlwb0yqd817ati8n61idcm9e6f == \r\d\e\i\k\4\q\l\p\d\6\v\q\z\k\s\6\m\s\g\v\z\0\y\p\h\y\u\h\5\o\f\3\k\s\s\b\w\e\1\6\n\y\m\s\7\t\0\o\g\l\g\4\q\k\t\f\5\k\q\x\w\d\m\d\h\v\4\w\a\5\0\v\x\0\6\a\f\y\h\9\v\3\3\k\m\9\0\2\a\5\p\z\b\1\h\e\4\n\j\x\1\2\t\z\q\6\2\a\q\3\s\c\6\e\s\5\o\7\z\w\n\5\f\8\7\g\n\v\l\8\0\q\0\c\w\v\j\v\t\w\r\d\n\v\k\f\f\w\i\b\9\1\2\z\l\m\w\i\f\e\9\a\q\0\w\r\p\y\w\k\h\k\z\v\w\k\f\z\c\1\u\w\f\1\k\w\4\5\f\h\o\f\u\n\9\d\h\y\i\c\7\l\i\r\m\k\7\j\g\m\g\o\q\s\w\p\u\3\8\c\e\1\u\g\u\n\i\4\7\j\e\g\o\m\i\m\d\e\x\1\w\1\p\o\8\c\z\p\2\q\b\2\5\q\e\n\6\v\p\i\u\o\9\3\a\y\z\0\d\f\h\h\v\c\7\c\r\t\o\s\2\q\6\8\4\n\2\h\s\u\t\j\w\7\y\d\c\i\l\2\d\v\c\w\7\i\n\7\j\a\i\d\l\z\n\a\u\9\1\m\y\5\o\p\i\4\l\1\i\5\x\0\v\f\c\o\y\q\y\8\5\u\p\u\y\l\0\k\c\z\l\g\g\5\v\o\2\b\8\l\i\p\e\o\o\b\q\9\4\3\5\p\e\e\o\7\2\e\q\2\6\r\g\0\5\c\h\9\4\j\z\h\7\c\1\y\h\3\l\5\s\f\o\o\t\y\t\o\7\n\t\p\i\d\7\1\x\4\2\4\0\z\v\b\i\x\v\m\g\v\2\l\9\r\e\y\t\q\7\s\9\1\p\6\n\v\2\j\4\j\x\l\r\d\3\n\u\m\c\t\z\u\f\k\8\k\z\r\x\r\q\g\2\s\x\u\0\x\b\d\n\a\u\n\4\v\l\w\b\0\y\q\d\8\1\7\a\t\i\8\n\6\1\i\d\c\m\9\e\6\f ]] 00:08:18.433 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.433 10:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:18.693 [2024-12-10 10:21:53.689485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
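dd_flags_misc sweeps the read flags (direct, nonblock) against the write flags (direct, nonblock, sync, dsync): for each read flag it regenerates a 512-byte payload, copies dd.dump0 to dd.dump1 with every --iflag/--oflag pairing, and re-checks the content (the long [[ ... == ... ]] comparisons). A compact sketch of that sweep with GNU dd, using cmp for the integrity check and illustrative file names; note that direct I/O wants block-aligned transfer sizes, which 512 bytes normally satisfies:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for ro in "${flags_ro[@]}"; do
        head -c 512 /dev/urandom > data.bin           # fresh 512-byte payload per read flag
        for rw in "${flags_rw[@]}"; do
            dd if=data.bin of=copy.bin iflag="$ro" oflag="$rw"
            cmp data.bin copy.bin                     # the copy must be byte-identical
        done
    done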
00:08:18.693 [2024-12-10 10:21:53.689587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73313 ] 00:08:18.693 [2024-12-10 10:21:53.826778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.693 [2024-12-10 10:21:53.857861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.693 [2024-12-10 10:21:53.883728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.693  [2024-12-10T10:21:54.178Z] Copying: 512/512 [B] (average 500 kBps) 00:08:18.951 00:08:18.951 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdeik4qlpd6vqzks6msgvz0yphyuh5of3kssbwe16nyms7t0oglg4qktf5kqxwdmdhv4wa50vx06afyh9v33km902a5pzb1he4njx12tzq62aq3sc6es5o7zwn5f87gnvl80q0cwvjvtwrdnvkffwib912zlmwife9aq0wrpywkhkzvwkfzc1uwf1kw45fhofun9dhyic7lirmk7jgmgoqswpu38ce1uguni47jegomimdex1w1po8czp2qb25qen6vpiuo93ayz0dfhhvc7crtos2q684n2hsutjw7ydcil2dvcw7in7jaidlznau91my5opi4l1i5x0vfcoyqy85upuyl0kczlgg5vo2b8lipeoobq9435peeo72eq26rg05ch94jzh7c1yh3l5sfootyto7ntpid71x4240zvbixvmgv2l9reytq7s91p6nv2j4jxlrd3numctzufk8kzrxrqg2sxu0xbdnaun4vlwb0yqd817ati8n61idcm9e6f == \r\d\e\i\k\4\q\l\p\d\6\v\q\z\k\s\6\m\s\g\v\z\0\y\p\h\y\u\h\5\o\f\3\k\s\s\b\w\e\1\6\n\y\m\s\7\t\0\o\g\l\g\4\q\k\t\f\5\k\q\x\w\d\m\d\h\v\4\w\a\5\0\v\x\0\6\a\f\y\h\9\v\3\3\k\m\9\0\2\a\5\p\z\b\1\h\e\4\n\j\x\1\2\t\z\q\6\2\a\q\3\s\c\6\e\s\5\o\7\z\w\n\5\f\8\7\g\n\v\l\8\0\q\0\c\w\v\j\v\t\w\r\d\n\v\k\f\f\w\i\b\9\1\2\z\l\m\w\i\f\e\9\a\q\0\w\r\p\y\w\k\h\k\z\v\w\k\f\z\c\1\u\w\f\1\k\w\4\5\f\h\o\f\u\n\9\d\h\y\i\c\7\l\i\r\m\k\7\j\g\m\g\o\q\s\w\p\u\3\8\c\e\1\u\g\u\n\i\4\7\j\e\g\o\m\i\m\d\e\x\1\w\1\p\o\8\c\z\p\2\q\b\2\5\q\e\n\6\v\p\i\u\o\9\3\a\y\z\0\d\f\h\h\v\c\7\c\r\t\o\s\2\q\6\8\4\n\2\h\s\u\t\j\w\7\y\d\c\i\l\2\d\v\c\w\7\i\n\7\j\a\i\d\l\z\n\a\u\9\1\m\y\5\o\p\i\4\l\1\i\5\x\0\v\f\c\o\y\q\y\8\5\u\p\u\y\l\0\k\c\z\l\g\g\5\v\o\2\b\8\l\i\p\e\o\o\b\q\9\4\3\5\p\e\e\o\7\2\e\q\2\6\r\g\0\5\c\h\9\4\j\z\h\7\c\1\y\h\3\l\5\s\f\o\o\t\y\t\o\7\n\t\p\i\d\7\1\x\4\2\4\0\z\v\b\i\x\v\m\g\v\2\l\9\r\e\y\t\q\7\s\9\1\p\6\n\v\2\j\4\j\x\l\r\d\3\n\u\m\c\t\z\u\f\k\8\k\z\r\x\r\q\g\2\s\x\u\0\x\b\d\n\a\u\n\4\v\l\w\b\0\y\q\d\8\1\7\a\t\i\8\n\6\1\i\d\c\m\9\e\6\f ]] 00:08:18.951 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.951 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:18.951 [2024-12-10 10:21:54.068868] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:18.951 [2024-12-10 10:21:54.068968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73321 ] 00:08:19.210 [2024-12-10 10:21:54.200523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.210 [2024-12-10 10:21:54.232167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.210 [2024-12-10 10:21:54.258074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.210  [2024-12-10T10:21:54.437Z] Copying: 512/512 [B] (average 250 kBps) 00:08:19.210 00:08:19.210 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdeik4qlpd6vqzks6msgvz0yphyuh5of3kssbwe16nyms7t0oglg4qktf5kqxwdmdhv4wa50vx06afyh9v33km902a5pzb1he4njx12tzq62aq3sc6es5o7zwn5f87gnvl80q0cwvjvtwrdnvkffwib912zlmwife9aq0wrpywkhkzvwkfzc1uwf1kw45fhofun9dhyic7lirmk7jgmgoqswpu38ce1uguni47jegomimdex1w1po8czp2qb25qen6vpiuo93ayz0dfhhvc7crtos2q684n2hsutjw7ydcil2dvcw7in7jaidlznau91my5opi4l1i5x0vfcoyqy85upuyl0kczlgg5vo2b8lipeoobq9435peeo72eq26rg05ch94jzh7c1yh3l5sfootyto7ntpid71x4240zvbixvmgv2l9reytq7s91p6nv2j4jxlrd3numctzufk8kzrxrqg2sxu0xbdnaun4vlwb0yqd817ati8n61idcm9e6f == \r\d\e\i\k\4\q\l\p\d\6\v\q\z\k\s\6\m\s\g\v\z\0\y\p\h\y\u\h\5\o\f\3\k\s\s\b\w\e\1\6\n\y\m\s\7\t\0\o\g\l\g\4\q\k\t\f\5\k\q\x\w\d\m\d\h\v\4\w\a\5\0\v\x\0\6\a\f\y\h\9\v\3\3\k\m\9\0\2\a\5\p\z\b\1\h\e\4\n\j\x\1\2\t\z\q\6\2\a\q\3\s\c\6\e\s\5\o\7\z\w\n\5\f\8\7\g\n\v\l\8\0\q\0\c\w\v\j\v\t\w\r\d\n\v\k\f\f\w\i\b\9\1\2\z\l\m\w\i\f\e\9\a\q\0\w\r\p\y\w\k\h\k\z\v\w\k\f\z\c\1\u\w\f\1\k\w\4\5\f\h\o\f\u\n\9\d\h\y\i\c\7\l\i\r\m\k\7\j\g\m\g\o\q\s\w\p\u\3\8\c\e\1\u\g\u\n\i\4\7\j\e\g\o\m\i\m\d\e\x\1\w\1\p\o\8\c\z\p\2\q\b\2\5\q\e\n\6\v\p\i\u\o\9\3\a\y\z\0\d\f\h\h\v\c\7\c\r\t\o\s\2\q\6\8\4\n\2\h\s\u\t\j\w\7\y\d\c\i\l\2\d\v\c\w\7\i\n\7\j\a\i\d\l\z\n\a\u\9\1\m\y\5\o\p\i\4\l\1\i\5\x\0\v\f\c\o\y\q\y\8\5\u\p\u\y\l\0\k\c\z\l\g\g\5\v\o\2\b\8\l\i\p\e\o\o\b\q\9\4\3\5\p\e\e\o\7\2\e\q\2\6\r\g\0\5\c\h\9\4\j\z\h\7\c\1\y\h\3\l\5\s\f\o\o\t\y\t\o\7\n\t\p\i\d\7\1\x\4\2\4\0\z\v\b\i\x\v\m\g\v\2\l\9\r\e\y\t\q\7\s\9\1\p\6\n\v\2\j\4\j\x\l\r\d\3\n\u\m\c\t\z\u\f\k\8\k\z\r\x\r\q\g\2\s\x\u\0\x\b\d\n\a\u\n\4\v\l\w\b\0\y\q\d\8\1\7\a\t\i\8\n\6\1\i\d\c\m\9\e\6\f ]] 00:08:19.210 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.210 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:19.470 [2024-12-10 10:21:54.450445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:19.470 [2024-12-10 10:21:54.450548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73332 ] 00:08:19.470 [2024-12-10 10:21:54.587526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.470 [2024-12-10 10:21:54.618083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.470 [2024-12-10 10:21:54.643643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.470  [2024-12-10T10:21:54.956Z] Copying: 512/512 [B] (average 250 kBps) 00:08:19.729 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rdeik4qlpd6vqzks6msgvz0yphyuh5of3kssbwe16nyms7t0oglg4qktf5kqxwdmdhv4wa50vx06afyh9v33km902a5pzb1he4njx12tzq62aq3sc6es5o7zwn5f87gnvl80q0cwvjvtwrdnvkffwib912zlmwife9aq0wrpywkhkzvwkfzc1uwf1kw45fhofun9dhyic7lirmk7jgmgoqswpu38ce1uguni47jegomimdex1w1po8czp2qb25qen6vpiuo93ayz0dfhhvc7crtos2q684n2hsutjw7ydcil2dvcw7in7jaidlznau91my5opi4l1i5x0vfcoyqy85upuyl0kczlgg5vo2b8lipeoobq9435peeo72eq26rg05ch94jzh7c1yh3l5sfootyto7ntpid71x4240zvbixvmgv2l9reytq7s91p6nv2j4jxlrd3numctzufk8kzrxrqg2sxu0xbdnaun4vlwb0yqd817ati8n61idcm9e6f == \r\d\e\i\k\4\q\l\p\d\6\v\q\z\k\s\6\m\s\g\v\z\0\y\p\h\y\u\h\5\o\f\3\k\s\s\b\w\e\1\6\n\y\m\s\7\t\0\o\g\l\g\4\q\k\t\f\5\k\q\x\w\d\m\d\h\v\4\w\a\5\0\v\x\0\6\a\f\y\h\9\v\3\3\k\m\9\0\2\a\5\p\z\b\1\h\e\4\n\j\x\1\2\t\z\q\6\2\a\q\3\s\c\6\e\s\5\o\7\z\w\n\5\f\8\7\g\n\v\l\8\0\q\0\c\w\v\j\v\t\w\r\d\n\v\k\f\f\w\i\b\9\1\2\z\l\m\w\i\f\e\9\a\q\0\w\r\p\y\w\k\h\k\z\v\w\k\f\z\c\1\u\w\f\1\k\w\4\5\f\h\o\f\u\n\9\d\h\y\i\c\7\l\i\r\m\k\7\j\g\m\g\o\q\s\w\p\u\3\8\c\e\1\u\g\u\n\i\4\7\j\e\g\o\m\i\m\d\e\x\1\w\1\p\o\8\c\z\p\2\q\b\2\5\q\e\n\6\v\p\i\u\o\9\3\a\y\z\0\d\f\h\h\v\c\7\c\r\t\o\s\2\q\6\8\4\n\2\h\s\u\t\j\w\7\y\d\c\i\l\2\d\v\c\w\7\i\n\7\j\a\i\d\l\z\n\a\u\9\1\m\y\5\o\p\i\4\l\1\i\5\x\0\v\f\c\o\y\q\y\8\5\u\p\u\y\l\0\k\c\z\l\g\g\5\v\o\2\b\8\l\i\p\e\o\o\b\q\9\4\3\5\p\e\e\o\7\2\e\q\2\6\r\g\0\5\c\h\9\4\j\z\h\7\c\1\y\h\3\l\5\s\f\o\o\t\y\t\o\7\n\t\p\i\d\7\1\x\4\2\4\0\z\v\b\i\x\v\m\g\v\2\l\9\r\e\y\t\q\7\s\9\1\p\6\n\v\2\j\4\j\x\l\r\d\3\n\u\m\c\t\z\u\f\k\8\k\z\r\x\r\q\g\2\s\x\u\0\x\b\d\n\a\u\n\4\v\l\w\b\0\y\q\d\8\1\7\a\t\i\8\n\6\1\i\d\c\m\9\e\6\f ]] 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.729 10:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:19.729 [2024-12-10 10:21:54.845326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:19.729 [2024-12-10 10:21:54.845445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73336 ] 00:08:19.988 [2024-12-10 10:21:54.974742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.988 [2024-12-10 10:21:55.005534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.988 [2024-12-10 10:21:55.031159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.988  [2024-12-10T10:21:55.215Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.988 00:08:19.988 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pztla6pl077yhgfi4p5po77ksnndyzoqfd4vpkme2ob21un91gv3vit6hjxsk3ra4djnp32kwkl7peucbffsxrm8z33gqhmo73ink0htg4chf8aj1yljsx2dio481xz5kzks8p3hzn8tf08xton7w9z96s6mr6itrzsfz0azdvn4s7nhs6sjwmui099w26usszxlpiqf2awvxsmie64z6tcqn6f7ihgcp924pyc7c3smcvx8uf0nx5mfyp5a4a4yyl1ef8qn3b4qfg3g8qdm71wwly69yaw47roavrj7t2n8cx1zpl76pmp6lygu528wiljs3xlgw0xi8nhypjiftkjwzxndb9cek6un4wi7ezqnmnrthj8k53305g8n5dux82wzbvyk1pe5evlj2q5pgnta2a5eiylju4y9s6ag5an79o1u0iy7nffbft3se12ypn2l08sung7l2g4xd9rnil2dz2euqb0uatmkzaqlozhsgikdtb1qndmk8hrg449g == \p\z\t\l\a\6\p\l\0\7\7\y\h\g\f\i\4\p\5\p\o\7\7\k\s\n\n\d\y\z\o\q\f\d\4\v\p\k\m\e\2\o\b\2\1\u\n\9\1\g\v\3\v\i\t\6\h\j\x\s\k\3\r\a\4\d\j\n\p\3\2\k\w\k\l\7\p\e\u\c\b\f\f\s\x\r\m\8\z\3\3\g\q\h\m\o\7\3\i\n\k\0\h\t\g\4\c\h\f\8\a\j\1\y\l\j\s\x\2\d\i\o\4\8\1\x\z\5\k\z\k\s\8\p\3\h\z\n\8\t\f\0\8\x\t\o\n\7\w\9\z\9\6\s\6\m\r\6\i\t\r\z\s\f\z\0\a\z\d\v\n\4\s\7\n\h\s\6\s\j\w\m\u\i\0\9\9\w\2\6\u\s\s\z\x\l\p\i\q\f\2\a\w\v\x\s\m\i\e\6\4\z\6\t\c\q\n\6\f\7\i\h\g\c\p\9\2\4\p\y\c\7\c\3\s\m\c\v\x\8\u\f\0\n\x\5\m\f\y\p\5\a\4\a\4\y\y\l\1\e\f\8\q\n\3\b\4\q\f\g\3\g\8\q\d\m\7\1\w\w\l\y\6\9\y\a\w\4\7\r\o\a\v\r\j\7\t\2\n\8\c\x\1\z\p\l\7\6\p\m\p\6\l\y\g\u\5\2\8\w\i\l\j\s\3\x\l\g\w\0\x\i\8\n\h\y\p\j\i\f\t\k\j\w\z\x\n\d\b\9\c\e\k\6\u\n\4\w\i\7\e\z\q\n\m\n\r\t\h\j\8\k\5\3\3\0\5\g\8\n\5\d\u\x\8\2\w\z\b\v\y\k\1\p\e\5\e\v\l\j\2\q\5\p\g\n\t\a\2\a\5\e\i\y\l\j\u\4\y\9\s\6\a\g\5\a\n\7\9\o\1\u\0\i\y\7\n\f\f\b\f\t\3\s\e\1\2\y\p\n\2\l\0\8\s\u\n\g\7\l\2\g\4\x\d\9\r\n\i\l\2\d\z\2\e\u\q\b\0\u\a\t\m\k\z\a\q\l\o\z\h\s\g\i\k\d\t\b\1\q\n\d\m\k\8\h\r\g\4\4\9\g ]] 00:08:19.988 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.988 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:20.246 [2024-12-10 10:21:55.222524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:20.246 [2024-12-10 10:21:55.222615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73345 ] 00:08:20.246 [2024-12-10 10:21:55.360976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.246 [2024-12-10 10:21:55.392578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.246 [2024-12-10 10:21:55.423978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.246  [2024-12-10T10:21:55.732Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.505 00:08:20.505 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pztla6pl077yhgfi4p5po77ksnndyzoqfd4vpkme2ob21un91gv3vit6hjxsk3ra4djnp32kwkl7peucbffsxrm8z33gqhmo73ink0htg4chf8aj1yljsx2dio481xz5kzks8p3hzn8tf08xton7w9z96s6mr6itrzsfz0azdvn4s7nhs6sjwmui099w26usszxlpiqf2awvxsmie64z6tcqn6f7ihgcp924pyc7c3smcvx8uf0nx5mfyp5a4a4yyl1ef8qn3b4qfg3g8qdm71wwly69yaw47roavrj7t2n8cx1zpl76pmp6lygu528wiljs3xlgw0xi8nhypjiftkjwzxndb9cek6un4wi7ezqnmnrthj8k53305g8n5dux82wzbvyk1pe5evlj2q5pgnta2a5eiylju4y9s6ag5an79o1u0iy7nffbft3se12ypn2l08sung7l2g4xd9rnil2dz2euqb0uatmkzaqlozhsgikdtb1qndmk8hrg449g == \p\z\t\l\a\6\p\l\0\7\7\y\h\g\f\i\4\p\5\p\o\7\7\k\s\n\n\d\y\z\o\q\f\d\4\v\p\k\m\e\2\o\b\2\1\u\n\9\1\g\v\3\v\i\t\6\h\j\x\s\k\3\r\a\4\d\j\n\p\3\2\k\w\k\l\7\p\e\u\c\b\f\f\s\x\r\m\8\z\3\3\g\q\h\m\o\7\3\i\n\k\0\h\t\g\4\c\h\f\8\a\j\1\y\l\j\s\x\2\d\i\o\4\8\1\x\z\5\k\z\k\s\8\p\3\h\z\n\8\t\f\0\8\x\t\o\n\7\w\9\z\9\6\s\6\m\r\6\i\t\r\z\s\f\z\0\a\z\d\v\n\4\s\7\n\h\s\6\s\j\w\m\u\i\0\9\9\w\2\6\u\s\s\z\x\l\p\i\q\f\2\a\w\v\x\s\m\i\e\6\4\z\6\t\c\q\n\6\f\7\i\h\g\c\p\9\2\4\p\y\c\7\c\3\s\m\c\v\x\8\u\f\0\n\x\5\m\f\y\p\5\a\4\a\4\y\y\l\1\e\f\8\q\n\3\b\4\q\f\g\3\g\8\q\d\m\7\1\w\w\l\y\6\9\y\a\w\4\7\r\o\a\v\r\j\7\t\2\n\8\c\x\1\z\p\l\7\6\p\m\p\6\l\y\g\u\5\2\8\w\i\l\j\s\3\x\l\g\w\0\x\i\8\n\h\y\p\j\i\f\t\k\j\w\z\x\n\d\b\9\c\e\k\6\u\n\4\w\i\7\e\z\q\n\m\n\r\t\h\j\8\k\5\3\3\0\5\g\8\n\5\d\u\x\8\2\w\z\b\v\y\k\1\p\e\5\e\v\l\j\2\q\5\p\g\n\t\a\2\a\5\e\i\y\l\j\u\4\y\9\s\6\a\g\5\a\n\7\9\o\1\u\0\i\y\7\n\f\f\b\f\t\3\s\e\1\2\y\p\n\2\l\0\8\s\u\n\g\7\l\2\g\4\x\d\9\r\n\i\l\2\d\z\2\e\u\q\b\0\u\a\t\m\k\z\a\q\l\o\z\h\s\g\i\k\d\t\b\1\q\n\d\m\k\8\h\r\g\4\4\9\g ]] 00:08:20.505 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.505 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:20.505 [2024-12-10 10:21:55.624522] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:20.505 [2024-12-10 10:21:55.624645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73355 ] 00:08:20.765 [2024-12-10 10:21:55.761385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.765 [2024-12-10 10:21:55.792621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.765 [2024-12-10 10:21:55.818335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.765  [2024-12-10T10:21:55.992Z] Copying: 512/512 [B] (average 250 kBps) 00:08:20.765 00:08:20.765 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pztla6pl077yhgfi4p5po77ksnndyzoqfd4vpkme2ob21un91gv3vit6hjxsk3ra4djnp32kwkl7peucbffsxrm8z33gqhmo73ink0htg4chf8aj1yljsx2dio481xz5kzks8p3hzn8tf08xton7w9z96s6mr6itrzsfz0azdvn4s7nhs6sjwmui099w26usszxlpiqf2awvxsmie64z6tcqn6f7ihgcp924pyc7c3smcvx8uf0nx5mfyp5a4a4yyl1ef8qn3b4qfg3g8qdm71wwly69yaw47roavrj7t2n8cx1zpl76pmp6lygu528wiljs3xlgw0xi8nhypjiftkjwzxndb9cek6un4wi7ezqnmnrthj8k53305g8n5dux82wzbvyk1pe5evlj2q5pgnta2a5eiylju4y9s6ag5an79o1u0iy7nffbft3se12ypn2l08sung7l2g4xd9rnil2dz2euqb0uatmkzaqlozhsgikdtb1qndmk8hrg449g == \p\z\t\l\a\6\p\l\0\7\7\y\h\g\f\i\4\p\5\p\o\7\7\k\s\n\n\d\y\z\o\q\f\d\4\v\p\k\m\e\2\o\b\2\1\u\n\9\1\g\v\3\v\i\t\6\h\j\x\s\k\3\r\a\4\d\j\n\p\3\2\k\w\k\l\7\p\e\u\c\b\f\f\s\x\r\m\8\z\3\3\g\q\h\m\o\7\3\i\n\k\0\h\t\g\4\c\h\f\8\a\j\1\y\l\j\s\x\2\d\i\o\4\8\1\x\z\5\k\z\k\s\8\p\3\h\z\n\8\t\f\0\8\x\t\o\n\7\w\9\z\9\6\s\6\m\r\6\i\t\r\z\s\f\z\0\a\z\d\v\n\4\s\7\n\h\s\6\s\j\w\m\u\i\0\9\9\w\2\6\u\s\s\z\x\l\p\i\q\f\2\a\w\v\x\s\m\i\e\6\4\z\6\t\c\q\n\6\f\7\i\h\g\c\p\9\2\4\p\y\c\7\c\3\s\m\c\v\x\8\u\f\0\n\x\5\m\f\y\p\5\a\4\a\4\y\y\l\1\e\f\8\q\n\3\b\4\q\f\g\3\g\8\q\d\m\7\1\w\w\l\y\6\9\y\a\w\4\7\r\o\a\v\r\j\7\t\2\n\8\c\x\1\z\p\l\7\6\p\m\p\6\l\y\g\u\5\2\8\w\i\l\j\s\3\x\l\g\w\0\x\i\8\n\h\y\p\j\i\f\t\k\j\w\z\x\n\d\b\9\c\e\k\6\u\n\4\w\i\7\e\z\q\n\m\n\r\t\h\j\8\k\5\3\3\0\5\g\8\n\5\d\u\x\8\2\w\z\b\v\y\k\1\p\e\5\e\v\l\j\2\q\5\p\g\n\t\a\2\a\5\e\i\y\l\j\u\4\y\9\s\6\a\g\5\a\n\7\9\o\1\u\0\i\y\7\n\f\f\b\f\t\3\s\e\1\2\y\p\n\2\l\0\8\s\u\n\g\7\l\2\g\4\x\d\9\r\n\i\l\2\d\z\2\e\u\q\b\0\u\a\t\m\k\z\a\q\l\o\z\h\s\g\i\k\d\t\b\1\q\n\d\m\k\8\h\r\g\4\4\9\g ]] 00:08:20.765 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.765 10:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:21.027 [2024-12-10 10:21:56.009954] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:21.027 [2024-12-10 10:21:56.010054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73359 ] 00:08:21.027 [2024-12-10 10:21:56.146214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.027 [2024-12-10 10:21:56.186638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.027 [2024-12-10 10:21:56.216831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.027  [2024-12-10T10:21:56.514Z] Copying: 512/512 [B] (average 250 kBps) 00:08:21.287 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pztla6pl077yhgfi4p5po77ksnndyzoqfd4vpkme2ob21un91gv3vit6hjxsk3ra4djnp32kwkl7peucbffsxrm8z33gqhmo73ink0htg4chf8aj1yljsx2dio481xz5kzks8p3hzn8tf08xton7w9z96s6mr6itrzsfz0azdvn4s7nhs6sjwmui099w26usszxlpiqf2awvxsmie64z6tcqn6f7ihgcp924pyc7c3smcvx8uf0nx5mfyp5a4a4yyl1ef8qn3b4qfg3g8qdm71wwly69yaw47roavrj7t2n8cx1zpl76pmp6lygu528wiljs3xlgw0xi8nhypjiftkjwzxndb9cek6un4wi7ezqnmnrthj8k53305g8n5dux82wzbvyk1pe5evlj2q5pgnta2a5eiylju4y9s6ag5an79o1u0iy7nffbft3se12ypn2l08sung7l2g4xd9rnil2dz2euqb0uatmkzaqlozhsgikdtb1qndmk8hrg449g == \p\z\t\l\a\6\p\l\0\7\7\y\h\g\f\i\4\p\5\p\o\7\7\k\s\n\n\d\y\z\o\q\f\d\4\v\p\k\m\e\2\o\b\2\1\u\n\9\1\g\v\3\v\i\t\6\h\j\x\s\k\3\r\a\4\d\j\n\p\3\2\k\w\k\l\7\p\e\u\c\b\f\f\s\x\r\m\8\z\3\3\g\q\h\m\o\7\3\i\n\k\0\h\t\g\4\c\h\f\8\a\j\1\y\l\j\s\x\2\d\i\o\4\8\1\x\z\5\k\z\k\s\8\p\3\h\z\n\8\t\f\0\8\x\t\o\n\7\w\9\z\9\6\s\6\m\r\6\i\t\r\z\s\f\z\0\a\z\d\v\n\4\s\7\n\h\s\6\s\j\w\m\u\i\0\9\9\w\2\6\u\s\s\z\x\l\p\i\q\f\2\a\w\v\x\s\m\i\e\6\4\z\6\t\c\q\n\6\f\7\i\h\g\c\p\9\2\4\p\y\c\7\c\3\s\m\c\v\x\8\u\f\0\n\x\5\m\f\y\p\5\a\4\a\4\y\y\l\1\e\f\8\q\n\3\b\4\q\f\g\3\g\8\q\d\m\7\1\w\w\l\y\6\9\y\a\w\4\7\r\o\a\v\r\j\7\t\2\n\8\c\x\1\z\p\l\7\6\p\m\p\6\l\y\g\u\5\2\8\w\i\l\j\s\3\x\l\g\w\0\x\i\8\n\h\y\p\j\i\f\t\k\j\w\z\x\n\d\b\9\c\e\k\6\u\n\4\w\i\7\e\z\q\n\m\n\r\t\h\j\8\k\5\3\3\0\5\g\8\n\5\d\u\x\8\2\w\z\b\v\y\k\1\p\e\5\e\v\l\j\2\q\5\p\g\n\t\a\2\a\5\e\i\y\l\j\u\4\y\9\s\6\a\g\5\a\n\7\9\o\1\u\0\i\y\7\n\f\f\b\f\t\3\s\e\1\2\y\p\n\2\l\0\8\s\u\n\g\7\l\2\g\4\x\d\9\r\n\i\l\2\d\z\2\e\u\q\b\0\u\a\t\m\k\z\a\q\l\o\z\h\s\g\i\k\d\t\b\1\q\n\d\m\k\8\h\r\g\4\4\9\g ]] 00:08:21.287 00:08:21.287 real 0m3.130s 00:08:21.287 user 0m1.550s 00:08:21.287 sys 0m1.297s 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:21.287 ************************************ 00:08:21.287 END TEST dd_flags_misc 00:08:21.287 ************************************ 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:21.287 * Second test run, disabling liburing, forcing AIO 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.287 ************************************ 00:08:21.287 START TEST dd_flag_append_forced_aio 00:08:21.287 ************************************ 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=pa4efu4hii1hzkbk254dn8mdexvk6545 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=plc8lqtbiu36ft5kbradhqu5njvb4bqw 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s pa4efu4hii1hzkbk254dn8mdexvk6545 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s plc8lqtbiu36ft5kbradhqu5njvb4bqw 00:08:21.287 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:21.287 [2024-12-10 10:21:56.480432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
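From the "* Second test run, disabling liburing, forcing AIO" banner onwards every invocation carries --aio (DD_APP+=("--aio")), so the same POSIX-flag checks are repeated over the POSIX AIO code path instead of io_uring. The append case just launched writes the 32-byte dump0 string onto the end of dd.dump1 with --oflag=append and then expects dd.dump1 to hold the dump1 string followed by the dump0 string (the [[ plc8...pa4e... == ... ]] check that follows). A stand-alone sketch of the append semantics with GNU dd, short literals standing in for the generated 32-byte strings:

    printf 'AAAA' > dump0
    printf 'BBBB' > dump1
    dd if=dump0 of=dump1 oflag=append conv=notrunc    # O_APPEND write; conv=notrunc keeps the existing bytes
    [ "$(cat dump1)" = 'BBBBAAAA' ] && echo OK        # result is the original dump1 followed by dump0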
00:08:21.287 [2024-12-10 10:21:56.480535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73393 ] 00:08:21.546 [2024-12-10 10:21:56.619908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.546 [2024-12-10 10:21:56.650653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.546 [2024-12-10 10:21:56.677178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.546  [2024-12-10T10:21:57.032Z] Copying: 32/32 [B] (average 31 kBps) 00:08:21.805 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ plc8lqtbiu36ft5kbradhqu5njvb4bqwpa4efu4hii1hzkbk254dn8mdexvk6545 == \p\l\c\8\l\q\t\b\i\u\3\6\f\t\5\k\b\r\a\d\h\q\u\5\n\j\v\b\4\b\q\w\p\a\4\e\f\u\4\h\i\i\1\h\z\k\b\k\2\5\4\d\n\8\m\d\e\x\v\k\6\5\4\5 ]] 00:08:21.805 00:08:21.805 real 0m0.431s 00:08:21.805 user 0m0.210s 00:08:21.805 sys 0m0.100s 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.805 ************************************ 00:08:21.805 END TEST dd_flag_append_forced_aio 00:08:21.805 ************************************ 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:21.805 ************************************ 00:08:21.805 START TEST dd_flag_directory_forced_aio 00:08:21.805 ************************************ 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.805 10:21:56 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.805 10:21:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.805 [2024-12-10 10:21:56.960498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:21.805 [2024-12-10 10:21:56.960606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73414 ] 00:08:22.065 [2024-12-10 10:21:57.100097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.065 [2024-12-10 10:21:57.130851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.065 [2024-12-10 10:21:57.158513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.065 [2024-12-10 10:21:57.174458] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:22.065 [2024-12-10 10:21:57.174528] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:22.065 [2024-12-10 10:21:57.174557] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.065 [2024-12-10 10:21:57.231260] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.324 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:22.324 [2024-12-10 10:21:57.357935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.324 [2024-12-10 10:21:57.358026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73429 ] 00:08:22.324 [2024-12-10 10:21:57.496470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.324 [2024-12-10 10:21:57.530113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.583 [2024-12-10 10:21:57.558242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.583 [2024-12-10 10:21:57.573156] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:22.583 [2024-12-10 10:21:57.573220] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:22.583 [2024-12-10 10:21:57.573249] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.583 [2024-12-10 10:21:57.629038] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:22.583 10:21:57 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.583 00:08:22.583 real 0m0.792s 00:08:22.583 user 0m0.399s 00:08:22.583 sys 0m0.186s 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.583 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.583 ************************************ 00:08:22.583 END TEST dd_flag_directory_forced_aio 00:08:22.583 ************************************ 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:22.584 ************************************ 00:08:22.584 START TEST dd_flag_nofollow_forced_aio 00:08:22.584 ************************************ 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.584 10:21:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.843 [2024-12-10 10:21:57.811093] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.843 [2024-12-10 10:21:57.811189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73452 ] 00:08:22.843 [2024-12-10 10:21:57.949317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.843 [2024-12-10 10:21:57.981181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.843 [2024-12-10 10:21:58.007380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.843 [2024-12-10 10:21:58.021514] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:22.843 [2024-12-10 10:21:58.021573] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:22.843 [2024-12-10 10:21:58.021585] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.102 [2024-12-10 10:21:58.077709] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:23.102 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.103 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:23.103 [2024-12-10 10:21:58.195934] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.103 [2024-12-10 10:21:58.196025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73456 ] 00:08:23.362 [2024-12-10 10:21:58.337135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.362 [2024-12-10 10:21:58.373130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.362 [2024-12-10 10:21:58.400829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.362 [2024-12-10 10:21:58.417262] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:23.362 [2024-12-10 10:21:58.417313] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:23.362 [2024-12-10 10:21:58.417327] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.362 [2024-12-10 10:21:58.474455] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:23.362 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.621 [2024-12-10 10:21:58.601218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.621 [2024-12-10 10:21:58.601312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73469 ] 00:08:23.621 [2024-12-10 10:21:58.739080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.621 [2024-12-10 10:21:58.770950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.621 [2024-12-10 10:21:58.796964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.621  [2024-12-10T10:21:59.108Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.881 00:08:23.881 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 8ptly7t64ijizge7j617fwd3zjtiov3oafl7rcjy9owk8qs2tsveq91dky3h25gdfpeyld7vjdcmixrg75ur7cq98f3ma5lx0x98j9ylmikzz3ycmwxq3ntzqii017wwe5ww86xn03hk91g462rxvqw2mi53yvooe9k6tbbek675iwhnlb4drfn69br2l5u6k2ay7qs5ur5cur91nx2nn3ugh43eiq10pldgrkgydnrkns5j2ms4ui58p4kanlns7vocon9if673abfm1i7vlshcrxawjyj40p442wieiuygcxatqmt7mgqslbgi6wv0jhnq4c5tnl0xfe1zvlbse7ty8prytkvmyztcl6vv4e96soxsdree5agjhoy70ynodb7jax4emtyagim5xa8udkfgl01c3lgb0sxmvj1o6fkbhw01o9et4fcrkxw0w3kxmz6oi1flc6ho1xzs62hfn12wlnyo9igxkotd5cuh98q62s3f2pcxx75psu4vyr44 == \8\p\t\l\y\7\t\6\4\i\j\i\z\g\e\7\j\6\1\7\f\w\d\3\z\j\t\i\o\v\3\o\a\f\l\7\r\c\j\y\9\o\w\k\8\q\s\2\t\s\v\e\q\9\1\d\k\y\3\h\2\5\g\d\f\p\e\y\l\d\7\v\j\d\c\m\i\x\r\g\7\5\u\r\7\c\q\9\8\f\3\m\a\5\l\x\0\x\9\8\j\9\y\l\m\i\k\z\z\3\y\c\m\w\x\q\3\n\t\z\q\i\i\0\1\7\w\w\e\5\w\w\8\6\x\n\0\3\h\k\9\1\g\4\6\2\r\x\v\q\w\2\m\i\5\3\y\v\o\o\e\9\k\6\t\b\b\e\k\6\7\5\i\w\h\n\l\b\4\d\r\f\n\6\9\b\r\2\l\5\u\6\k\2\a\y\7\q\s\5\u\r\5\c\u\r\9\1\n\x\2\n\n\3\u\g\h\4\3\e\i\q\1\0\p\l\d\g\r\k\g\y\d\n\r\k\n\s\5\j\2\m\s\4\u\i\5\8\p\4\k\a\n\l\n\s\7\v\o\c\o\n\9\i\f\6\7\3\a\b\f\m\1\i\7\v\l\s\h\c\r\x\a\w\j\y\j\4\0\p\4\4\2\w\i\e\i\u\y\g\c\x\a\t\q\m\t\7\m\g\q\s\l\b\g\i\6\w\v\0\j\h\n\q\4\c\5\t\n\l\0\x\f\e\1\z\v\l\b\s\e\7\t\y\8\p\r\y\t\k\v\m\y\z\t\c\l\6\v\v\4\e\9\6\s\o\x\s\d\r\e\e\5\a\g\j\h\o\y\7\0\y\n\o\d\b\7\j\a\x\4\e\m\t\y\a\g\i\m\5\x\a\8\u\d\k\f\g\l\0\1\c\3\l\g\b\0\s\x\m\v\j\1\o\6\f\k\b\h\w\0\1\o\9\e\t\4\f\c\r\k\x\w\0\w\3\k\x\m\z\6\o\i\1\f\l\c\6\h\o\1\x\z\s\6\2\h\f\n\1\2\w\l\n\y\o\9\i\g\x\k\o\t\d\5\c\u\h\9\8\q\6\2\s\3\f\2\p\c\x\x\7\5\p\s\u\4\v\y\r\4\4 ]] 00:08:23.881 00:08:23.881 real 0m1.216s 00:08:23.881 user 0m0.609s 00:08:23.881 sys 0m0.282s 00:08:23.881 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.881 ************************************ 00:08:23.881 END TEST dd_flag_nofollow_forced_aio 00:08:23.881 10:21:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:23.881 ************************************ 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix -- 
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:23.881 ************************************ 00:08:23.881 START TEST dd_flag_noatime_forced_aio 00:08:23.881 ************************************ 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733826118 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733826118 00:08:23.881 10:21:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:24.818 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.077 [2024-12-10 10:22:00.094584] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:25.077 [2024-12-10 10:22:00.094692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73504 ] 00:08:25.077 [2024-12-10 10:22:00.235469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.077 [2024-12-10 10:22:00.276043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.336 [2024-12-10 10:22:00.309786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.336  [2024-12-10T10:22:00.563Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.336 00:08:25.336 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.336 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733826118 )) 00:08:25.336 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.336 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733826118 )) 00:08:25.336 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.336 [2024-12-10 10:22:00.536839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:25.336 [2024-12-10 10:22:00.536949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73521 ] 00:08:25.595 [2024-12-10 10:22:00.675579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.595 [2024-12-10 10:22:00.706361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.595 [2024-12-10 10:22:00.732319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.595  [2024-12-10T10:22:01.081Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.854 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733826120 )) 00:08:25.854 00:08:25.854 real 0m1.887s 00:08:25.854 user 0m0.449s 00:08:25.854 sys 0m0.201s 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.854 ************************************ 00:08:25.854 END TEST dd_flag_noatime_forced_aio 00:08:25.854 ************************************ 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.854 ************************************ 00:08:25.854 START TEST dd_flags_misc_forced_aio 00:08:25.854 ************************************ 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.854 10:22:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:25.854 [2024-12-10 10:22:01.016663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:25.854 [2024-12-10 10:22:01.016765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73542 ] 00:08:26.113 [2024-12-10 10:22:01.154634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.113 [2024-12-10 10:22:01.196213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.113 [2024-12-10 10:22:01.227777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.113  [2024-12-10T10:22:01.599Z] Copying: 512/512 [B] (average 500 kBps) 00:08:26.372 00:08:26.372 10:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ elza8dgrul66eejgbw5n80xzceqab92bch84od4kqd2ubj1wal6u4khom85d0hcs0quis8s4ot6z7i25yta59rb5ay1yx1e131i9zjxudvzg9fbnumwnpfsa5lwtl4hq0a7g5heotp3f7x10ilihwz2be6diknldok43ziqkne460mcexwvaj539d9lb5qvtyppx1jdses9a6rf1o5ant7ft5tf4i19l3ij5fng50ccg9fct2t7m7uw4a0q9o413due8ilaoeyms6z08b4nrfvh8mfbcjgd054g3atzcephmmpkdlbsomhu17awf4tm0mgxkk6y6z6c2ngsujayzz9kr7kp953q5qbk75nrnqobpkk4aebil1i2s5bywqup1swrqbtwzadti68e8s2f1sobx9sm8o6r7y052bp5owa3vf39j0wivacsfzy468vmtxnfwk3okyp388naexy914h1aaw6y6hwsnqlyi532jgfunjaj1fojnz1zyvtpgn8f == 
\e\l\z\a\8\d\g\r\u\l\6\6\e\e\j\g\b\w\5\n\8\0\x\z\c\e\q\a\b\9\2\b\c\h\8\4\o\d\4\k\q\d\2\u\b\j\1\w\a\l\6\u\4\k\h\o\m\8\5\d\0\h\c\s\0\q\u\i\s\8\s\4\o\t\6\z\7\i\2\5\y\t\a\5\9\r\b\5\a\y\1\y\x\1\e\1\3\1\i\9\z\j\x\u\d\v\z\g\9\f\b\n\u\m\w\n\p\f\s\a\5\l\w\t\l\4\h\q\0\a\7\g\5\h\e\o\t\p\3\f\7\x\1\0\i\l\i\h\w\z\2\b\e\6\d\i\k\n\l\d\o\k\4\3\z\i\q\k\n\e\4\6\0\m\c\e\x\w\v\a\j\5\3\9\d\9\l\b\5\q\v\t\y\p\p\x\1\j\d\s\e\s\9\a\6\r\f\1\o\5\a\n\t\7\f\t\5\t\f\4\i\1\9\l\3\i\j\5\f\n\g\5\0\c\c\g\9\f\c\t\2\t\7\m\7\u\w\4\a\0\q\9\o\4\1\3\d\u\e\8\i\l\a\o\e\y\m\s\6\z\0\8\b\4\n\r\f\v\h\8\m\f\b\c\j\g\d\0\5\4\g\3\a\t\z\c\e\p\h\m\m\p\k\d\l\b\s\o\m\h\u\1\7\a\w\f\4\t\m\0\m\g\x\k\k\6\y\6\z\6\c\2\n\g\s\u\j\a\y\z\z\9\k\r\7\k\p\9\5\3\q\5\q\b\k\7\5\n\r\n\q\o\b\p\k\k\4\a\e\b\i\l\1\i\2\s\5\b\y\w\q\u\p\1\s\w\r\q\b\t\w\z\a\d\t\i\6\8\e\8\s\2\f\1\s\o\b\x\9\s\m\8\o\6\r\7\y\0\5\2\b\p\5\o\w\a\3\v\f\3\9\j\0\w\i\v\a\c\s\f\z\y\4\6\8\v\m\t\x\n\f\w\k\3\o\k\y\p\3\8\8\n\a\e\x\y\9\1\4\h\1\a\a\w\6\y\6\h\w\s\n\q\l\y\i\5\3\2\j\g\f\u\n\j\a\j\1\f\o\j\n\z\1\z\y\v\t\p\g\n\8\f ]] 00:08:26.372 10:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.372 10:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:26.372 [2024-12-10 10:22:01.443409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:26.372 [2024-12-10 10:22:01.443506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73555 ] 00:08:26.372 [2024-12-10 10:22:01.571568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.630 [2024-12-10 10:22:01.604755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.630 [2024-12-10 10:22:01.630458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.630  [2024-12-10T10:22:01.857Z] Copying: 512/512 [B] (average 500 kBps) 00:08:26.630 00:08:26.630 10:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ elza8dgrul66eejgbw5n80xzceqab92bch84od4kqd2ubj1wal6u4khom85d0hcs0quis8s4ot6z7i25yta59rb5ay1yx1e131i9zjxudvzg9fbnumwnpfsa5lwtl4hq0a7g5heotp3f7x10ilihwz2be6diknldok43ziqkne460mcexwvaj539d9lb5qvtyppx1jdses9a6rf1o5ant7ft5tf4i19l3ij5fng50ccg9fct2t7m7uw4a0q9o413due8ilaoeyms6z08b4nrfvh8mfbcjgd054g3atzcephmmpkdlbsomhu17awf4tm0mgxkk6y6z6c2ngsujayzz9kr7kp953q5qbk75nrnqobpkk4aebil1i2s5bywqup1swrqbtwzadti68e8s2f1sobx9sm8o6r7y052bp5owa3vf39j0wivacsfzy468vmtxnfwk3okyp388naexy914h1aaw6y6hwsnqlyi532jgfunjaj1fojnz1zyvtpgn8f == 
\e\l\z\a\8\d\g\r\u\l\6\6\e\e\j\g\b\w\5\n\8\0\x\z\c\e\q\a\b\9\2\b\c\h\8\4\o\d\4\k\q\d\2\u\b\j\1\w\a\l\6\u\4\k\h\o\m\8\5\d\0\h\c\s\0\q\u\i\s\8\s\4\o\t\6\z\7\i\2\5\y\t\a\5\9\r\b\5\a\y\1\y\x\1\e\1\3\1\i\9\z\j\x\u\d\v\z\g\9\f\b\n\u\m\w\n\p\f\s\a\5\l\w\t\l\4\h\q\0\a\7\g\5\h\e\o\t\p\3\f\7\x\1\0\i\l\i\h\w\z\2\b\e\6\d\i\k\n\l\d\o\k\4\3\z\i\q\k\n\e\4\6\0\m\c\e\x\w\v\a\j\5\3\9\d\9\l\b\5\q\v\t\y\p\p\x\1\j\d\s\e\s\9\a\6\r\f\1\o\5\a\n\t\7\f\t\5\t\f\4\i\1\9\l\3\i\j\5\f\n\g\5\0\c\c\g\9\f\c\t\2\t\7\m\7\u\w\4\a\0\q\9\o\4\1\3\d\u\e\8\i\l\a\o\e\y\m\s\6\z\0\8\b\4\n\r\f\v\h\8\m\f\b\c\j\g\d\0\5\4\g\3\a\t\z\c\e\p\h\m\m\p\k\d\l\b\s\o\m\h\u\1\7\a\w\f\4\t\m\0\m\g\x\k\k\6\y\6\z\6\c\2\n\g\s\u\j\a\y\z\z\9\k\r\7\k\p\9\5\3\q\5\q\b\k\7\5\n\r\n\q\o\b\p\k\k\4\a\e\b\i\l\1\i\2\s\5\b\y\w\q\u\p\1\s\w\r\q\b\t\w\z\a\d\t\i\6\8\e\8\s\2\f\1\s\o\b\x\9\s\m\8\o\6\r\7\y\0\5\2\b\p\5\o\w\a\3\v\f\3\9\j\0\w\i\v\a\c\s\f\z\y\4\6\8\v\m\t\x\n\f\w\k\3\o\k\y\p\3\8\8\n\a\e\x\y\9\1\4\h\1\a\a\w\6\y\6\h\w\s\n\q\l\y\i\5\3\2\j\g\f\u\n\j\a\j\1\f\o\j\n\z\1\z\y\v\t\p\g\n\8\f ]] 00:08:26.630 10:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.630 10:22:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:26.630 [2024-12-10 10:22:01.831055] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:26.630 [2024-12-10 10:22:01.831157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73557 ] 00:08:26.889 [2024-12-10 10:22:01.959436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.889 [2024-12-10 10:22:01.990354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.889 [2024-12-10 10:22:02.015989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.889  [2024-12-10T10:22:02.375Z] Copying: 512/512 [B] (average 125 kBps) 00:08:27.148 00:08:27.148 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ elza8dgrul66eejgbw5n80xzceqab92bch84od4kqd2ubj1wal6u4khom85d0hcs0quis8s4ot6z7i25yta59rb5ay1yx1e131i9zjxudvzg9fbnumwnpfsa5lwtl4hq0a7g5heotp3f7x10ilihwz2be6diknldok43ziqkne460mcexwvaj539d9lb5qvtyppx1jdses9a6rf1o5ant7ft5tf4i19l3ij5fng50ccg9fct2t7m7uw4a0q9o413due8ilaoeyms6z08b4nrfvh8mfbcjgd054g3atzcephmmpkdlbsomhu17awf4tm0mgxkk6y6z6c2ngsujayzz9kr7kp953q5qbk75nrnqobpkk4aebil1i2s5bywqup1swrqbtwzadti68e8s2f1sobx9sm8o6r7y052bp5owa3vf39j0wivacsfzy468vmtxnfwk3okyp388naexy914h1aaw6y6hwsnqlyi532jgfunjaj1fojnz1zyvtpgn8f == 
\e\l\z\a\8\d\g\r\u\l\6\6\e\e\j\g\b\w\5\n\8\0\x\z\c\e\q\a\b\9\2\b\c\h\8\4\o\d\4\k\q\d\2\u\b\j\1\w\a\l\6\u\4\k\h\o\m\8\5\d\0\h\c\s\0\q\u\i\s\8\s\4\o\t\6\z\7\i\2\5\y\t\a\5\9\r\b\5\a\y\1\y\x\1\e\1\3\1\i\9\z\j\x\u\d\v\z\g\9\f\b\n\u\m\w\n\p\f\s\a\5\l\w\t\l\4\h\q\0\a\7\g\5\h\e\o\t\p\3\f\7\x\1\0\i\l\i\h\w\z\2\b\e\6\d\i\k\n\l\d\o\k\4\3\z\i\q\k\n\e\4\6\0\m\c\e\x\w\v\a\j\5\3\9\d\9\l\b\5\q\v\t\y\p\p\x\1\j\d\s\e\s\9\a\6\r\f\1\o\5\a\n\t\7\f\t\5\t\f\4\i\1\9\l\3\i\j\5\f\n\g\5\0\c\c\g\9\f\c\t\2\t\7\m\7\u\w\4\a\0\q\9\o\4\1\3\d\u\e\8\i\l\a\o\e\y\m\s\6\z\0\8\b\4\n\r\f\v\h\8\m\f\b\c\j\g\d\0\5\4\g\3\a\t\z\c\e\p\h\m\m\p\k\d\l\b\s\o\m\h\u\1\7\a\w\f\4\t\m\0\m\g\x\k\k\6\y\6\z\6\c\2\n\g\s\u\j\a\y\z\z\9\k\r\7\k\p\9\5\3\q\5\q\b\k\7\5\n\r\n\q\o\b\p\k\k\4\a\e\b\i\l\1\i\2\s\5\b\y\w\q\u\p\1\s\w\r\q\b\t\w\z\a\d\t\i\6\8\e\8\s\2\f\1\s\o\b\x\9\s\m\8\o\6\r\7\y\0\5\2\b\p\5\o\w\a\3\v\f\3\9\j\0\w\i\v\a\c\s\f\z\y\4\6\8\v\m\t\x\n\f\w\k\3\o\k\y\p\3\8\8\n\a\e\x\y\9\1\4\h\1\a\a\w\6\y\6\h\w\s\n\q\l\y\i\5\3\2\j\g\f\u\n\j\a\j\1\f\o\j\n\z\1\z\y\v\t\p\g\n\8\f ]] 00:08:27.148 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.148 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:27.148 [2024-12-10 10:22:02.238351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:27.148 [2024-12-10 10:22:02.238493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73559 ] 00:08:27.407 [2024-12-10 10:22:02.376476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.407 [2024-12-10 10:22:02.415321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.407 [2024-12-10 10:22:02.443128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.407  [2024-12-10T10:22:02.634Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.407 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ elza8dgrul66eejgbw5n80xzceqab92bch84od4kqd2ubj1wal6u4khom85d0hcs0quis8s4ot6z7i25yta59rb5ay1yx1e131i9zjxudvzg9fbnumwnpfsa5lwtl4hq0a7g5heotp3f7x10ilihwz2be6diknldok43ziqkne460mcexwvaj539d9lb5qvtyppx1jdses9a6rf1o5ant7ft5tf4i19l3ij5fng50ccg9fct2t7m7uw4a0q9o413due8ilaoeyms6z08b4nrfvh8mfbcjgd054g3atzcephmmpkdlbsomhu17awf4tm0mgxkk6y6z6c2ngsujayzz9kr7kp953q5qbk75nrnqobpkk4aebil1i2s5bywqup1swrqbtwzadti68e8s2f1sobx9sm8o6r7y052bp5owa3vf39j0wivacsfzy468vmtxnfwk3okyp388naexy914h1aaw6y6hwsnqlyi532jgfunjaj1fojnz1zyvtpgn8f == 
\e\l\z\a\8\d\g\r\u\l\6\6\e\e\j\g\b\w\5\n\8\0\x\z\c\e\q\a\b\9\2\b\c\h\8\4\o\d\4\k\q\d\2\u\b\j\1\w\a\l\6\u\4\k\h\o\m\8\5\d\0\h\c\s\0\q\u\i\s\8\s\4\o\t\6\z\7\i\2\5\y\t\a\5\9\r\b\5\a\y\1\y\x\1\e\1\3\1\i\9\z\j\x\u\d\v\z\g\9\f\b\n\u\m\w\n\p\f\s\a\5\l\w\t\l\4\h\q\0\a\7\g\5\h\e\o\t\p\3\f\7\x\1\0\i\l\i\h\w\z\2\b\e\6\d\i\k\n\l\d\o\k\4\3\z\i\q\k\n\e\4\6\0\m\c\e\x\w\v\a\j\5\3\9\d\9\l\b\5\q\v\t\y\p\p\x\1\j\d\s\e\s\9\a\6\r\f\1\o\5\a\n\t\7\f\t\5\t\f\4\i\1\9\l\3\i\j\5\f\n\g\5\0\c\c\g\9\f\c\t\2\t\7\m\7\u\w\4\a\0\q\9\o\4\1\3\d\u\e\8\i\l\a\o\e\y\m\s\6\z\0\8\b\4\n\r\f\v\h\8\m\f\b\c\j\g\d\0\5\4\g\3\a\t\z\c\e\p\h\m\m\p\k\d\l\b\s\o\m\h\u\1\7\a\w\f\4\t\m\0\m\g\x\k\k\6\y\6\z\6\c\2\n\g\s\u\j\a\y\z\z\9\k\r\7\k\p\9\5\3\q\5\q\b\k\7\5\n\r\n\q\o\b\p\k\k\4\a\e\b\i\l\1\i\2\s\5\b\y\w\q\u\p\1\s\w\r\q\b\t\w\z\a\d\t\i\6\8\e\8\s\2\f\1\s\o\b\x\9\s\m\8\o\6\r\7\y\0\5\2\b\p\5\o\w\a\3\v\f\3\9\j\0\w\i\v\a\c\s\f\z\y\4\6\8\v\m\t\x\n\f\w\k\3\o\k\y\p\3\8\8\n\a\e\x\y\9\1\4\h\1\a\a\w\6\y\6\h\w\s\n\q\l\y\i\5\3\2\j\g\f\u\n\j\a\j\1\f\o\j\n\z\1\z\y\v\t\p\g\n\8\f ]] 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.407 10:22:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:27.666 [2024-12-10 10:22:02.678669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:27.666 [2024-12-10 10:22:02.678771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73572 ] 00:08:27.666 [2024-12-10 10:22:02.817101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.666 [2024-12-10 10:22:02.849307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.666 [2024-12-10 10:22:02.875685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.666  [2024-12-10T10:22:03.152Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.925 00:08:27.925 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4izmfr97zqbjzw54tu3nq7vb2nmxx47eeaztwirfr10rmhwi7ck6a8yvc5rc0ddfpeaik15kh0ph8xeaqo8wvyzzuyh4cm8dujjhvyzwrphzwi0g4frgpothpgbvkoprc5v3n1ab4vb36twqpu7x6uzwvp0olunof3ik1fq9hufwfh8icnd2pxxc0x214w74fo8v913s6yqz2zq68ny2322x7weaf6d49brzhywvfymc3mc0e21ogqz5adc2kaihwpqu1okz1vwkcvkcy6sjdwl1yoj06jrw8hn2ca3j0i2035bj58lf4ozouyrevz5o6poupy0lzn9k7ooydo43xbhv4us8pa43wt74eh6xmxouqwd9xzxgayne0akbx5vmcsr5j38ab5a348les4thz64kmvpjzby6u1le9w1ut56k3bqv9ceq8yuyowrfe3quiv1a2g23xgp9rtjdfey970ohdj05j32xtlybdmt0l4sa0eu8mg3s6p63ctq3hvk9 == \4\i\z\m\f\r\9\7\z\q\b\j\z\w\5\4\t\u\3\n\q\7\v\b\2\n\m\x\x\4\7\e\e\a\z\t\w\i\r\f\r\1\0\r\m\h\w\i\7\c\k\6\a\8\y\v\c\5\r\c\0\d\d\f\p\e\a\i\k\1\5\k\h\0\p\h\8\x\e\a\q\o\8\w\v\y\z\z\u\y\h\4\c\m\8\d\u\j\j\h\v\y\z\w\r\p\h\z\w\i\0\g\4\f\r\g\p\o\t\h\p\g\b\v\k\o\p\r\c\5\v\3\n\1\a\b\4\v\b\3\6\t\w\q\p\u\7\x\6\u\z\w\v\p\0\o\l\u\n\o\f\3\i\k\1\f\q\9\h\u\f\w\f\h\8\i\c\n\d\2\p\x\x\c\0\x\2\1\4\w\7\4\f\o\8\v\9\1\3\s\6\y\q\z\2\z\q\6\8\n\y\2\3\2\2\x\7\w\e\a\f\6\d\4\9\b\r\z\h\y\w\v\f\y\m\c\3\m\c\0\e\2\1\o\g\q\z\5\a\d\c\2\k\a\i\h\w\p\q\u\1\o\k\z\1\v\w\k\c\v\k\c\y\6\s\j\d\w\l\1\y\o\j\0\6\j\r\w\8\h\n\2\c\a\3\j\0\i\2\0\3\5\b\j\5\8\l\f\4\o\z\o\u\y\r\e\v\z\5\o\6\p\o\u\p\y\0\l\z\n\9\k\7\o\o\y\d\o\4\3\x\b\h\v\4\u\s\8\p\a\4\3\w\t\7\4\e\h\6\x\m\x\o\u\q\w\d\9\x\z\x\g\a\y\n\e\0\a\k\b\x\5\v\m\c\s\r\5\j\3\8\a\b\5\a\3\4\8\l\e\s\4\t\h\z\6\4\k\m\v\p\j\z\b\y\6\u\1\l\e\9\w\1\u\t\5\6\k\3\b\q\v\9\c\e\q\8\y\u\y\o\w\r\f\e\3\q\u\i\v\1\a\2\g\2\3\x\g\p\9\r\t\j\d\f\e\y\9\7\0\o\h\d\j\0\5\j\3\2\x\t\l\y\b\d\m\t\0\l\4\s\a\0\e\u\8\m\g\3\s\6\p\6\3\c\t\q\3\h\v\k\9 ]] 00:08:27.925 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.925 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:27.925 [2024-12-10 10:22:03.085687] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:27.925 [2024-12-10 10:22:03.085803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73574 ] 00:08:28.184 [2024-12-10 10:22:03.223549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.184 [2024-12-10 10:22:03.255681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.184 [2024-12-10 10:22:03.282600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.184  [2024-12-10T10:22:03.669Z] Copying: 512/512 [B] (average 500 kBps) 00:08:28.442 00:08:28.442 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4izmfr97zqbjzw54tu3nq7vb2nmxx47eeaztwirfr10rmhwi7ck6a8yvc5rc0ddfpeaik15kh0ph8xeaqo8wvyzzuyh4cm8dujjhvyzwrphzwi0g4frgpothpgbvkoprc5v3n1ab4vb36twqpu7x6uzwvp0olunof3ik1fq9hufwfh8icnd2pxxc0x214w74fo8v913s6yqz2zq68ny2322x7weaf6d49brzhywvfymc3mc0e21ogqz5adc2kaihwpqu1okz1vwkcvkcy6sjdwl1yoj06jrw8hn2ca3j0i2035bj58lf4ozouyrevz5o6poupy0lzn9k7ooydo43xbhv4us8pa43wt74eh6xmxouqwd9xzxgayne0akbx5vmcsr5j38ab5a348les4thz64kmvpjzby6u1le9w1ut56k3bqv9ceq8yuyowrfe3quiv1a2g23xgp9rtjdfey970ohdj05j32xtlybdmt0l4sa0eu8mg3s6p63ctq3hvk9 == \4\i\z\m\f\r\9\7\z\q\b\j\z\w\5\4\t\u\3\n\q\7\v\b\2\n\m\x\x\4\7\e\e\a\z\t\w\i\r\f\r\1\0\r\m\h\w\i\7\c\k\6\a\8\y\v\c\5\r\c\0\d\d\f\p\e\a\i\k\1\5\k\h\0\p\h\8\x\e\a\q\o\8\w\v\y\z\z\u\y\h\4\c\m\8\d\u\j\j\h\v\y\z\w\r\p\h\z\w\i\0\g\4\f\r\g\p\o\t\h\p\g\b\v\k\o\p\r\c\5\v\3\n\1\a\b\4\v\b\3\6\t\w\q\p\u\7\x\6\u\z\w\v\p\0\o\l\u\n\o\f\3\i\k\1\f\q\9\h\u\f\w\f\h\8\i\c\n\d\2\p\x\x\c\0\x\2\1\4\w\7\4\f\o\8\v\9\1\3\s\6\y\q\z\2\z\q\6\8\n\y\2\3\2\2\x\7\w\e\a\f\6\d\4\9\b\r\z\h\y\w\v\f\y\m\c\3\m\c\0\e\2\1\o\g\q\z\5\a\d\c\2\k\a\i\h\w\p\q\u\1\o\k\z\1\v\w\k\c\v\k\c\y\6\s\j\d\w\l\1\y\o\j\0\6\j\r\w\8\h\n\2\c\a\3\j\0\i\2\0\3\5\b\j\5\8\l\f\4\o\z\o\u\y\r\e\v\z\5\o\6\p\o\u\p\y\0\l\z\n\9\k\7\o\o\y\d\o\4\3\x\b\h\v\4\u\s\8\p\a\4\3\w\t\7\4\e\h\6\x\m\x\o\u\q\w\d\9\x\z\x\g\a\y\n\e\0\a\k\b\x\5\v\m\c\s\r\5\j\3\8\a\b\5\a\3\4\8\l\e\s\4\t\h\z\6\4\k\m\v\p\j\z\b\y\6\u\1\l\e\9\w\1\u\t\5\6\k\3\b\q\v\9\c\e\q\8\y\u\y\o\w\r\f\e\3\q\u\i\v\1\a\2\g\2\3\x\g\p\9\r\t\j\d\f\e\y\9\7\0\o\h\d\j\0\5\j\3\2\x\t\l\y\b\d\m\t\0\l\4\s\a\0\e\u\8\m\g\3\s\6\p\6\3\c\t\q\3\h\v\k\9 ]] 00:08:28.442 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.442 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:28.442 [2024-12-10 10:22:03.482626] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:28.442 [2024-12-10 10:22:03.482697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73583 ] 00:08:28.442 [2024-12-10 10:22:03.614009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.442 [2024-12-10 10:22:03.648721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.701 [2024-12-10 10:22:03.676267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.701  [2024-12-10T10:22:03.928Z] Copying: 512/512 [B] (average 500 kBps) 00:08:28.701 00:08:28.701 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4izmfr97zqbjzw54tu3nq7vb2nmxx47eeaztwirfr10rmhwi7ck6a8yvc5rc0ddfpeaik15kh0ph8xeaqo8wvyzzuyh4cm8dujjhvyzwrphzwi0g4frgpothpgbvkoprc5v3n1ab4vb36twqpu7x6uzwvp0olunof3ik1fq9hufwfh8icnd2pxxc0x214w74fo8v913s6yqz2zq68ny2322x7weaf6d49brzhywvfymc3mc0e21ogqz5adc2kaihwpqu1okz1vwkcvkcy6sjdwl1yoj06jrw8hn2ca3j0i2035bj58lf4ozouyrevz5o6poupy0lzn9k7ooydo43xbhv4us8pa43wt74eh6xmxouqwd9xzxgayne0akbx5vmcsr5j38ab5a348les4thz64kmvpjzby6u1le9w1ut56k3bqv9ceq8yuyowrfe3quiv1a2g23xgp9rtjdfey970ohdj05j32xtlybdmt0l4sa0eu8mg3s6p63ctq3hvk9 == \4\i\z\m\f\r\9\7\z\q\b\j\z\w\5\4\t\u\3\n\q\7\v\b\2\n\m\x\x\4\7\e\e\a\z\t\w\i\r\f\r\1\0\r\m\h\w\i\7\c\k\6\a\8\y\v\c\5\r\c\0\d\d\f\p\e\a\i\k\1\5\k\h\0\p\h\8\x\e\a\q\o\8\w\v\y\z\z\u\y\h\4\c\m\8\d\u\j\j\h\v\y\z\w\r\p\h\z\w\i\0\g\4\f\r\g\p\o\t\h\p\g\b\v\k\o\p\r\c\5\v\3\n\1\a\b\4\v\b\3\6\t\w\q\p\u\7\x\6\u\z\w\v\p\0\o\l\u\n\o\f\3\i\k\1\f\q\9\h\u\f\w\f\h\8\i\c\n\d\2\p\x\x\c\0\x\2\1\4\w\7\4\f\o\8\v\9\1\3\s\6\y\q\z\2\z\q\6\8\n\y\2\3\2\2\x\7\w\e\a\f\6\d\4\9\b\r\z\h\y\w\v\f\y\m\c\3\m\c\0\e\2\1\o\g\q\z\5\a\d\c\2\k\a\i\h\w\p\q\u\1\o\k\z\1\v\w\k\c\v\k\c\y\6\s\j\d\w\l\1\y\o\j\0\6\j\r\w\8\h\n\2\c\a\3\j\0\i\2\0\3\5\b\j\5\8\l\f\4\o\z\o\u\y\r\e\v\z\5\o\6\p\o\u\p\y\0\l\z\n\9\k\7\o\o\y\d\o\4\3\x\b\h\v\4\u\s\8\p\a\4\3\w\t\7\4\e\h\6\x\m\x\o\u\q\w\d\9\x\z\x\g\a\y\n\e\0\a\k\b\x\5\v\m\c\s\r\5\j\3\8\a\b\5\a\3\4\8\l\e\s\4\t\h\z\6\4\k\m\v\p\j\z\b\y\6\u\1\l\e\9\w\1\u\t\5\6\k\3\b\q\v\9\c\e\q\8\y\u\y\o\w\r\f\e\3\q\u\i\v\1\a\2\g\2\3\x\g\p\9\r\t\j\d\f\e\y\9\7\0\o\h\d\j\0\5\j\3\2\x\t\l\y\b\d\m\t\0\l\4\s\a\0\e\u\8\m\g\3\s\6\p\6\3\c\t\q\3\h\v\k\9 ]] 00:08:28.701 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.701 10:22:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:28.701 [2024-12-10 10:22:03.880431] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:28.702 [2024-12-10 10:22:03.880525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73589 ] 00:08:28.960 [2024-12-10 10:22:04.017521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.960 [2024-12-10 10:22:04.048218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.960 [2024-12-10 10:22:04.073944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.960  [2024-12-10T10:22:04.446Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.219 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4izmfr97zqbjzw54tu3nq7vb2nmxx47eeaztwirfr10rmhwi7ck6a8yvc5rc0ddfpeaik15kh0ph8xeaqo8wvyzzuyh4cm8dujjhvyzwrphzwi0g4frgpothpgbvkoprc5v3n1ab4vb36twqpu7x6uzwvp0olunof3ik1fq9hufwfh8icnd2pxxc0x214w74fo8v913s6yqz2zq68ny2322x7weaf6d49brzhywvfymc3mc0e21ogqz5adc2kaihwpqu1okz1vwkcvkcy6sjdwl1yoj06jrw8hn2ca3j0i2035bj58lf4ozouyrevz5o6poupy0lzn9k7ooydo43xbhv4us8pa43wt74eh6xmxouqwd9xzxgayne0akbx5vmcsr5j38ab5a348les4thz64kmvpjzby6u1le9w1ut56k3bqv9ceq8yuyowrfe3quiv1a2g23xgp9rtjdfey970ohdj05j32xtlybdmt0l4sa0eu8mg3s6p63ctq3hvk9 == \4\i\z\m\f\r\9\7\z\q\b\j\z\w\5\4\t\u\3\n\q\7\v\b\2\n\m\x\x\4\7\e\e\a\z\t\w\i\r\f\r\1\0\r\m\h\w\i\7\c\k\6\a\8\y\v\c\5\r\c\0\d\d\f\p\e\a\i\k\1\5\k\h\0\p\h\8\x\e\a\q\o\8\w\v\y\z\z\u\y\h\4\c\m\8\d\u\j\j\h\v\y\z\w\r\p\h\z\w\i\0\g\4\f\r\g\p\o\t\h\p\g\b\v\k\o\p\r\c\5\v\3\n\1\a\b\4\v\b\3\6\t\w\q\p\u\7\x\6\u\z\w\v\p\0\o\l\u\n\o\f\3\i\k\1\f\q\9\h\u\f\w\f\h\8\i\c\n\d\2\p\x\x\c\0\x\2\1\4\w\7\4\f\o\8\v\9\1\3\s\6\y\q\z\2\z\q\6\8\n\y\2\3\2\2\x\7\w\e\a\f\6\d\4\9\b\r\z\h\y\w\v\f\y\m\c\3\m\c\0\e\2\1\o\g\q\z\5\a\d\c\2\k\a\i\h\w\p\q\u\1\o\k\z\1\v\w\k\c\v\k\c\y\6\s\j\d\w\l\1\y\o\j\0\6\j\r\w\8\h\n\2\c\a\3\j\0\i\2\0\3\5\b\j\5\8\l\f\4\o\z\o\u\y\r\e\v\z\5\o\6\p\o\u\p\y\0\l\z\n\9\k\7\o\o\y\d\o\4\3\x\b\h\v\4\u\s\8\p\a\4\3\w\t\7\4\e\h\6\x\m\x\o\u\q\w\d\9\x\z\x\g\a\y\n\e\0\a\k\b\x\5\v\m\c\s\r\5\j\3\8\a\b\5\a\3\4\8\l\e\s\4\t\h\z\6\4\k\m\v\p\j\z\b\y\6\u\1\l\e\9\w\1\u\t\5\6\k\3\b\q\v\9\c\e\q\8\y\u\y\o\w\r\f\e\3\q\u\i\v\1\a\2\g\2\3\x\g\p\9\r\t\j\d\f\e\y\9\7\0\o\h\d\j\0\5\j\3\2\x\t\l\y\b\d\m\t\0\l\4\s\a\0\e\u\8\m\g\3\s\6\p\6\3\c\t\q\3\h\v\k\9 ]] 00:08:29.220 00:08:29.220 real 0m3.274s 00:08:29.220 user 0m1.606s 00:08:29.220 sys 0m0.711s 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:29.220 ************************************ 00:08:29.220 END TEST dd_flags_misc_forced_aio 00:08:29.220 ************************************ 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:29.220 00:08:29.220 real 0m15.714s 00:08:29.220 user 0m6.711s 00:08:29.220 sys 0m4.297s 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.220 10:22:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:08:29.220 ************************************ 00:08:29.220 END TEST spdk_dd_posix 00:08:29.220 ************************************ 00:08:29.220 10:22:04 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:29.220 10:22:04 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.220 10:22:04 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.220 10:22:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:29.220 ************************************ 00:08:29.220 START TEST spdk_dd_malloc 00:08:29.220 ************************************ 00:08:29.220 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:29.220 * Looking for test storage... 00:08:29.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:29.220 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.220 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.220 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.479 --rc genhtml_branch_coverage=1 00:08:29.479 --rc genhtml_function_coverage=1 00:08:29.479 --rc genhtml_legend=1 00:08:29.479 --rc geninfo_all_blocks=1 00:08:29.479 --rc geninfo_unexecuted_blocks=1 00:08:29.479 00:08:29.479 ' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.479 --rc genhtml_branch_coverage=1 00:08:29.479 --rc genhtml_function_coverage=1 00:08:29.479 --rc genhtml_legend=1 00:08:29.479 --rc geninfo_all_blocks=1 00:08:29.479 --rc geninfo_unexecuted_blocks=1 00:08:29.479 00:08:29.479 ' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.479 --rc genhtml_branch_coverage=1 00:08:29.479 --rc genhtml_function_coverage=1 00:08:29.479 --rc genhtml_legend=1 00:08:29.479 --rc geninfo_all_blocks=1 00:08:29.479 --rc geninfo_unexecuted_blocks=1 00:08:29.479 00:08:29.479 ' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.479 --rc genhtml_branch_coverage=1 00:08:29.479 --rc genhtml_function_coverage=1 00:08:29.479 --rc genhtml_legend=1 00:08:29.479 --rc geninfo_all_blocks=1 00:08:29.479 --rc geninfo_unexecuted_blocks=1 00:08:29.479 00:08:29.479 ' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.479 10:22:04 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.479 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:29.480 ************************************ 00:08:29.480 START TEST dd_malloc_copy 00:08:29.480 ************************************ 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.480 10:22:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.480 [2024-12-10 10:22:04.575987] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:29.480 [2024-12-10 10:22:04.576071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73670 ] 00:08:29.480 { 00:08:29.480 "subsystems": [ 00:08:29.480 { 00:08:29.480 "subsystem": "bdev", 00:08:29.480 "config": [ 00:08:29.480 { 00:08:29.480 "params": { 00:08:29.480 "block_size": 512, 00:08:29.480 "num_blocks": 1048576, 00:08:29.480 "name": "malloc0" 00:08:29.480 }, 00:08:29.480 "method": "bdev_malloc_create" 00:08:29.480 }, 00:08:29.480 { 00:08:29.480 "params": { 00:08:29.480 "block_size": 512, 00:08:29.480 "num_blocks": 1048576, 00:08:29.480 "name": "malloc1" 00:08:29.480 }, 00:08:29.480 "method": "bdev_malloc_create" 00:08:29.480 }, 00:08:29.480 { 00:08:29.480 "method": "bdev_wait_for_examine" 00:08:29.480 } 00:08:29.480 ] 00:08:29.480 } 00:08:29.480 ] 00:08:29.480 } 00:08:29.737 [2024-12-10 10:22:04.709072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.737 [2024-12-10 10:22:04.741314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.737 [2024-12-10 10:22:04.768793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.115  [2024-12-10T10:22:07.279Z] Copying: 237/512 [MB] (237 MBps) [2024-12-10T10:22:07.279Z] Copying: 443/512 [MB] (206 MBps) [2024-12-10T10:22:07.861Z] Copying: 512/512 [MB] (average 220 MBps) 00:08:32.634 00:08:32.634 10:22:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:32.634 10:22:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:32.634 10:22:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:32.634 10:22:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:32.634 [2024-12-10 10:22:07.636235] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:32.634 [2024-12-10 10:22:07.636423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73708 ] 00:08:32.634 { 00:08:32.634 "subsystems": [ 00:08:32.634 { 00:08:32.634 "subsystem": "bdev", 00:08:32.634 "config": [ 00:08:32.634 { 00:08:32.634 "params": { 00:08:32.634 "block_size": 512, 00:08:32.634 "num_blocks": 1048576, 00:08:32.634 "name": "malloc0" 00:08:32.634 }, 00:08:32.634 "method": "bdev_malloc_create" 00:08:32.634 }, 00:08:32.634 { 00:08:32.634 "params": { 00:08:32.634 "block_size": 512, 00:08:32.634 "num_blocks": 1048576, 00:08:32.634 "name": "malloc1" 00:08:32.634 }, 00:08:32.634 "method": "bdev_malloc_create" 00:08:32.634 }, 00:08:32.634 { 00:08:32.634 "method": "bdev_wait_for_examine" 00:08:32.634 } 00:08:32.634 ] 00:08:32.634 } 00:08:32.634 ] 00:08:32.634 } 00:08:32.634 [2024-12-10 10:22:07.779891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.634 [2024-12-10 10:22:07.813421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.634 [2024-12-10 10:22:07.843875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.024  [2024-12-10T10:22:10.187Z] Copying: 234/512 [MB] (234 MBps) [2024-12-10T10:22:10.446Z] Copying: 443/512 [MB] (209 MBps) [2024-12-10T10:22:10.705Z] Copying: 512/512 [MB] (average 221 MBps) 00:08:35.478 00:08:35.478 00:08:35.478 real 0m6.131s 00:08:35.478 user 0m5.513s 00:08:35.478 sys 0m0.480s 00:08:35.478 10:22:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.478 10:22:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:35.478 ************************************ 00:08:35.478 END TEST dd_malloc_copy 00:08:35.478 ************************************ 00:08:35.478 00:08:35.478 real 0m6.372s 00:08:35.478 user 0m5.651s 00:08:35.478 sys 0m0.588s 00:08:35.478 10:22:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.478 10:22:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:35.478 ************************************ 00:08:35.478 END TEST spdk_dd_malloc 00:08:35.478 ************************************ 00:08:35.737 10:22:10 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:35.737 10:22:10 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:35.737 10:22:10 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.737 10:22:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:35.737 ************************************ 00:08:35.737 START TEST spdk_dd_bdev_to_bdev 00:08:35.737 ************************************ 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:35.737 * Looking for test storage... 
00:08:35.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.737 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:35.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.738 --rc genhtml_branch_coverage=1 00:08:35.738 --rc genhtml_function_coverage=1 00:08:35.738 --rc genhtml_legend=1 00:08:35.738 --rc geninfo_all_blocks=1 00:08:35.738 --rc geninfo_unexecuted_blocks=1 00:08:35.738 00:08:35.738 ' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:35.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.738 --rc genhtml_branch_coverage=1 00:08:35.738 --rc genhtml_function_coverage=1 00:08:35.738 --rc genhtml_legend=1 00:08:35.738 --rc geninfo_all_blocks=1 00:08:35.738 --rc geninfo_unexecuted_blocks=1 00:08:35.738 00:08:35.738 ' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:35.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.738 --rc genhtml_branch_coverage=1 00:08:35.738 --rc genhtml_function_coverage=1 00:08:35.738 --rc genhtml_legend=1 00:08:35.738 --rc geninfo_all_blocks=1 00:08:35.738 --rc geninfo_unexecuted_blocks=1 00:08:35.738 00:08:35.738 ' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:35.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.738 --rc genhtml_branch_coverage=1 00:08:35.738 --rc genhtml_function_coverage=1 00:08:35.738 --rc genhtml_legend=1 00:08:35.738 --rc geninfo_all_blocks=1 00:08:35.738 --rc geninfo_unexecuted_blocks=1 00:08:35.738 00:08:35.738 ' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.738 10:22:10 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:35.738 ************************************ 00:08:35.738 START TEST dd_inflate_file 00:08:35.738 ************************************ 00:08:35.738 10:22:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:36.002 [2024-12-10 10:22:11.003140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:36.002 [2024-12-10 10:22:11.003247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73822 ] 00:08:36.002 [2024-12-10 10:22:11.141083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.002 [2024-12-10 10:22:11.173263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.002 [2024-12-10 10:22:11.200212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.261  [2024-12-10T10:22:11.488Z] Copying: 64/64 [MB] (average 1560 MBps) 00:08:36.261 00:08:36.261 00:08:36.261 real 0m0.433s 00:08:36.261 user 0m0.234s 00:08:36.261 sys 0m0.218s 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:36.261 ************************************ 00:08:36.261 END TEST dd_inflate_file 00:08:36.261 ************************************ 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:36.261 ************************************ 00:08:36.261 START TEST dd_copy_to_out_bdev 00:08:36.261 ************************************ 00:08:36.261 10:22:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:36.261 { 00:08:36.261 "subsystems": [ 00:08:36.261 { 00:08:36.261 "subsystem": "bdev", 00:08:36.261 "config": [ 00:08:36.261 { 00:08:36.261 "params": { 00:08:36.261 "trtype": "pcie", 00:08:36.261 "traddr": "0000:00:10.0", 00:08:36.261 "name": "Nvme0" 00:08:36.261 }, 00:08:36.261 "method": "bdev_nvme_attach_controller" 00:08:36.261 }, 00:08:36.261 { 00:08:36.261 "params": { 00:08:36.261 "trtype": "pcie", 00:08:36.261 "traddr": "0000:00:11.0", 00:08:36.261 "name": "Nvme1" 00:08:36.261 }, 00:08:36.261 "method": "bdev_nvme_attach_controller" 00:08:36.261 }, 00:08:36.261 { 00:08:36.261 "method": "bdev_wait_for_examine" 00:08:36.261 } 00:08:36.261 ] 00:08:36.261 } 00:08:36.261 ] 00:08:36.261 } 00:08:36.521 [2024-12-10 10:22:11.494237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
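The dd_copy_to_out_bdev run above pushes the dump file (67108891 bytes per the wc -c output) into the Nvme0n1 bdev, with the whole bdev layer described by the JSON that gen_conf emits and spdk_dd reads via --json /dev/fd/62. A minimal standalone sketch of the same invocation, assuming the config is saved to a regular file at /tmp/dd_bdevs.json (that path is an assumption, not part of the test), could look like:

  cat > /tmp/dd_bdevs.json <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
        "method": "bdev_nvme_attach_controller" },
      { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
        "method": "bdev_nvme_attach_controller" },
      { "method": "bdev_wait_for_examine" } ] } ] }
  JSON
  # same copy as the traced command, but reading the config from a file instead of /dev/fd/62
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ob=Nvme0n1 --json /tmp/dd_bdevs.json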
00:08:36.521 [2024-12-10 10:22:11.494345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73850 ] 00:08:36.521 [2024-12-10 10:22:11.631001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.521 [2024-12-10 10:22:11.661994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.521 [2024-12-10 10:22:11.688357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.899  [2024-12-10T10:22:13.126Z] Copying: 54/64 [MB] (54 MBps) [2024-12-10T10:22:13.385Z] Copying: 64/64 [MB] (average 55 MBps) 00:08:38.158 00:08:38.158 00:08:38.158 real 0m1.752s 00:08:38.158 user 0m1.569s 00:08:38.158 sys 0m1.414s 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:38.158 ************************************ 00:08:38.158 END TEST dd_copy_to_out_bdev 00:08:38.158 ************************************ 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:38.158 ************************************ 00:08:38.158 START TEST dd_offset_magic 00:08:38.158 ************************************ 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:38.158 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:38.158 [2024-12-10 10:22:13.311495] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:38.158 [2024-12-10 10:22:13.311596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73895 ] 00:08:38.158 { 00:08:38.158 "subsystems": [ 00:08:38.158 { 00:08:38.158 "subsystem": "bdev", 00:08:38.158 "config": [ 00:08:38.158 { 00:08:38.158 "params": { 00:08:38.158 "trtype": "pcie", 00:08:38.158 "traddr": "0000:00:10.0", 00:08:38.158 "name": "Nvme0" 00:08:38.158 }, 00:08:38.158 "method": "bdev_nvme_attach_controller" 00:08:38.158 }, 00:08:38.158 { 00:08:38.158 "params": { 00:08:38.158 "trtype": "pcie", 00:08:38.158 "traddr": "0000:00:11.0", 00:08:38.158 "name": "Nvme1" 00:08:38.158 }, 00:08:38.158 "method": "bdev_nvme_attach_controller" 00:08:38.158 }, 00:08:38.158 { 00:08:38.158 "method": "bdev_wait_for_examine" 00:08:38.158 } 00:08:38.158 ] 00:08:38.158 } 00:08:38.158 ] 00:08:38.158 } 00:08:38.417 [2024-12-10 10:22:13.453132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.417 [2024-12-10 10:22:13.488926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.417 [2024-12-10 10:22:13.515907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.676  [2024-12-10T10:22:13.903Z] Copying: 65/65 [MB] (average 1140 MBps) 00:08:38.676 00:08:38.676 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:38.676 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:38.676 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:38.676 10:22:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:38.936 { 00:08:38.936 "subsystems": [ 00:08:38.936 { 00:08:38.936 "subsystem": "bdev", 00:08:38.936 "config": [ 00:08:38.936 { 00:08:38.936 "params": { 00:08:38.936 "trtype": "pcie", 00:08:38.936 "traddr": "0000:00:10.0", 00:08:38.936 "name": "Nvme0" 00:08:38.936 }, 00:08:38.936 "method": "bdev_nvme_attach_controller" 00:08:38.936 }, 00:08:38.936 { 00:08:38.936 "params": { 00:08:38.936 "trtype": "pcie", 00:08:38.936 "traddr": "0000:00:11.0", 00:08:38.936 "name": "Nvme1" 00:08:38.936 }, 00:08:38.936 "method": "bdev_nvme_attach_controller" 00:08:38.936 }, 00:08:38.936 { 00:08:38.936 "method": "bdev_wait_for_examine" 00:08:38.936 } 00:08:38.936 ] 00:08:38.936 } 00:08:38.936 ] 00:08:38.936 } 00:08:38.936 [2024-12-10 10:22:13.950777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:38.936 [2024-12-10 10:22:13.950886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73910 ] 00:08:38.936 [2024-12-10 10:22:14.090143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.936 [2024-12-10 10:22:14.121958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.936 [2024-12-10 10:22:14.149204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.195  [2024-12-10T10:22:14.681Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:39.455 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:39.455 10:22:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:39.455 [2024-12-10 10:22:14.493426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:39.455 [2024-12-10 10:22:14.493533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73926 ] 00:08:39.455 { 00:08:39.455 "subsystems": [ 00:08:39.455 { 00:08:39.455 "subsystem": "bdev", 00:08:39.455 "config": [ 00:08:39.455 { 00:08:39.455 "params": { 00:08:39.455 "trtype": "pcie", 00:08:39.455 "traddr": "0000:00:10.0", 00:08:39.455 "name": "Nvme0" 00:08:39.455 }, 00:08:39.455 "method": "bdev_nvme_attach_controller" 00:08:39.455 }, 00:08:39.455 { 00:08:39.455 "params": { 00:08:39.455 "trtype": "pcie", 00:08:39.455 "traddr": "0000:00:11.0", 00:08:39.455 "name": "Nvme1" 00:08:39.455 }, 00:08:39.455 "method": "bdev_nvme_attach_controller" 00:08:39.455 }, 00:08:39.455 { 00:08:39.455 "method": "bdev_wait_for_examine" 00:08:39.455 } 00:08:39.455 ] 00:08:39.455 } 00:08:39.455 ] 00:08:39.455 } 00:08:39.455 [2024-12-10 10:22:14.631846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.455 [2024-12-10 10:22:14.667081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.714 [2024-12-10 10:22:14.695196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.714  [2024-12-10T10:22:15.200Z] Copying: 65/65 [MB] (average 1140 MBps) 00:08:39.973 00:08:39.973 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:39.973 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:39.973 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:39.973 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:39.973 { 00:08:39.973 "subsystems": [ 00:08:39.973 { 00:08:39.973 "subsystem": "bdev", 00:08:39.973 "config": [ 00:08:39.973 { 00:08:39.973 "params": { 00:08:39.973 "trtype": "pcie", 00:08:39.973 "traddr": "0000:00:10.0", 00:08:39.973 "name": "Nvme0" 00:08:39.973 }, 00:08:39.973 "method": "bdev_nvme_attach_controller" 00:08:39.973 }, 00:08:39.973 { 00:08:39.973 "params": { 00:08:39.973 "trtype": "pcie", 00:08:39.973 "traddr": "0000:00:11.0", 00:08:39.973 "name": "Nvme1" 00:08:39.973 }, 00:08:39.973 "method": "bdev_nvme_attach_controller" 00:08:39.973 }, 00:08:39.973 { 00:08:39.973 "method": "bdev_wait_for_examine" 00:08:39.973 } 00:08:39.973 ] 00:08:39.973 } 00:08:39.973 ] 00:08:39.973 } 00:08:39.973 [2024-12-10 10:22:15.131248] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:39.973 [2024-12-10 10:22:15.131348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73946 ] 00:08:40.233 [2024-12-10 10:22:15.268258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.233 [2024-12-10 10:22:15.299162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.233 [2024-12-10 10:22:15.325679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.491  [2024-12-10T10:22:15.718Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:40.491 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:40.491 00:08:40.491 real 0m2.382s 00:08:40.491 user 0m1.700s 00:08:40.491 sys 0m0.653s 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:40.491 ************************************ 00:08:40.491 END TEST dd_offset_magic 00:08:40.491 ************************************ 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:40.491 10:22:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:40.750 [2024-12-10 10:22:15.733589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
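The offset_magic loop that just completed follows a write/read/verify pattern: spdk_dd copies 65 one-MiB blocks from Nvme0n1 to Nvme1n1 at a given --seek, one block is read back into dd.dump1 with --skip, and the first 26 bytes are compared against the magic line. The cleanup starting above then zero-fills the head of each bdev (5 blocks of 1 MiB covers the 4194330-byte region passed to clear_nvme). A rough standalone sketch of the verify and zero-fill steps, reusing the hypothetical /tmp/dd_bdevs.json from the earlier note, might be:

  magic='This Is Our Magic, find it'
  # compare the first 26 bytes of the read-back file against the magic line
  read -rn${#magic} magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  [[ $magic_check == "$magic" ]] || echo 'magic mismatch' >&2
  # zero the first 5 MiB of the bdev so later steps start from a clean device
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
      --ob=Nvme0n1 --count=5 --json /tmp/dd_bdevs.json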
00:08:40.750 [2024-12-10 10:22:15.733699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73977 ] 00:08:40.750 { 00:08:40.750 "subsystems": [ 00:08:40.750 { 00:08:40.750 "subsystem": "bdev", 00:08:40.750 "config": [ 00:08:40.750 { 00:08:40.750 "params": { 00:08:40.750 "trtype": "pcie", 00:08:40.750 "traddr": "0000:00:10.0", 00:08:40.750 "name": "Nvme0" 00:08:40.750 }, 00:08:40.750 "method": "bdev_nvme_attach_controller" 00:08:40.750 }, 00:08:40.750 { 00:08:40.750 "params": { 00:08:40.750 "trtype": "pcie", 00:08:40.750 "traddr": "0000:00:11.0", 00:08:40.750 "name": "Nvme1" 00:08:40.750 }, 00:08:40.750 "method": "bdev_nvme_attach_controller" 00:08:40.750 }, 00:08:40.750 { 00:08:40.750 "method": "bdev_wait_for_examine" 00:08:40.750 } 00:08:40.750 ] 00:08:40.750 } 00:08:40.750 ] 00:08:40.750 } 00:08:40.750 [2024-12-10 10:22:15.874614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.750 [2024-12-10 10:22:15.912064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.750 [2024-12-10 10:22:15.943236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.010  [2024-12-10T10:22:16.496Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:08:41.269 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:41.269 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:41.269 [2024-12-10 10:22:16.306490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:41.269 [2024-12-10 10:22:16.306596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73993 ] 00:08:41.269 { 00:08:41.269 "subsystems": [ 00:08:41.269 { 00:08:41.269 "subsystem": "bdev", 00:08:41.269 "config": [ 00:08:41.269 { 00:08:41.269 "params": { 00:08:41.269 "trtype": "pcie", 00:08:41.269 "traddr": "0000:00:10.0", 00:08:41.269 "name": "Nvme0" 00:08:41.269 }, 00:08:41.269 "method": "bdev_nvme_attach_controller" 00:08:41.269 }, 00:08:41.269 { 00:08:41.269 "params": { 00:08:41.269 "trtype": "pcie", 00:08:41.269 "traddr": "0000:00:11.0", 00:08:41.269 "name": "Nvme1" 00:08:41.269 }, 00:08:41.269 "method": "bdev_nvme_attach_controller" 00:08:41.269 }, 00:08:41.269 { 00:08:41.269 "method": "bdev_wait_for_examine" 00:08:41.269 } 00:08:41.269 ] 00:08:41.269 } 00:08:41.269 ] 00:08:41.269 } 00:08:41.269 [2024-12-10 10:22:16.444862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.269 [2024-12-10 10:22:16.478877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.528 [2024-12-10 10:22:16.507023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.528  [2024-12-10T10:22:17.014Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:41.787 00:08:41.787 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:41.787 00:08:41.787 real 0m6.073s 00:08:41.787 user 0m4.470s 00:08:41.787 sys 0m2.864s 00:08:41.787 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.787 10:22:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:41.787 ************************************ 00:08:41.787 END TEST spdk_dd_bdev_to_bdev 00:08:41.787 ************************************ 00:08:41.787 10:22:16 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:41.787 10:22:16 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:41.787 10:22:16 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.787 10:22:16 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.787 10:22:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:41.787 ************************************ 00:08:41.787 START TEST spdk_dd_uring 00:08:41.787 ************************************ 00:08:41.787 10:22:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:41.787 * Looking for test storage... 
00:08:41.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:41.787 10:22:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.787 10:22:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.787 10:22:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.047 --rc genhtml_branch_coverage=1 00:08:42.047 --rc genhtml_function_coverage=1 00:08:42.047 --rc genhtml_legend=1 00:08:42.047 --rc geninfo_all_blocks=1 00:08:42.047 --rc geninfo_unexecuted_blocks=1 00:08:42.047 00:08:42.047 ' 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.047 --rc genhtml_branch_coverage=1 00:08:42.047 --rc genhtml_function_coverage=1 00:08:42.047 --rc genhtml_legend=1 00:08:42.047 --rc geninfo_all_blocks=1 00:08:42.047 --rc geninfo_unexecuted_blocks=1 00:08:42.047 00:08:42.047 ' 00:08:42.047 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.047 --rc genhtml_branch_coverage=1 00:08:42.047 --rc genhtml_function_coverage=1 00:08:42.047 --rc genhtml_legend=1 00:08:42.047 --rc geninfo_all_blocks=1 00:08:42.047 --rc geninfo_unexecuted_blocks=1 00:08:42.047 00:08:42.047 ' 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.048 --rc genhtml_branch_coverage=1 00:08:42.048 --rc genhtml_function_coverage=1 00:08:42.048 --rc genhtml_legend=1 00:08:42.048 --rc geninfo_all_blocks=1 00:08:42.048 --rc geninfo_unexecuted_blocks=1 00:08:42.048 00:08:42.048 ' 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:42.048 ************************************ 00:08:42.048 START TEST dd_uring_copy 00:08:42.048 ************************************ 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:42.048 
10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=z4g5tfroun20ez7w9m745skxjw0a154v0q1aap064aoj87dcio2sch2yuj28izyiu48wmyih35ay67j9mezly5iua3wolijcxywwcrrp6du9hy93nro4ylrj05wjb5e4dt49pqrjoljyimtxgravamxvp6maabkuef5x4qo20qtgm0zvh3jme3xod3ibc3iyc2xrvt9wlq02brijfe9sch4s9ra00liv08odqtk8exnl85k32pt7mazjees4i69px12syfdon3mk6b8obn1lkddjdkb4cyb4q803d2kzyrbf8wvxodmdyl6adbmamcbpkupim6rzi8bm0gfhfh692519lhbmbau4sa6r82vmn2dkbh1z3sm7z3hkr31loulavkucxoor0cz2mjcia8uuclw9etbi0g683965lunni7iyt92a7kp55fpka6l817s55rfq9989io8ywtvsi3k8wc8vmyz8ox8swdilcj7plm2l7wewdekh2anutlrrh47k95gefk5h1w3bgjkdbxk5dam05ae6dycsjezan4h3mrdwwwlz1p7dfus6dqnjh5y0eppiorsvqk4vi6rpckz0j6qjooqk8hbjtxsjmswnuqc42hkbotqgxlb0epz2dpqt8gnhmgfiuauej4lzhwtb76ut8wocnl91e6tz2wsax513qd3yh7eyif1ylgyz5d70ax2zpsfmmk7f3zdwr0nf0m6fgzeb06s5gdalejd99s64wam2h1hluss4rusdynynrno2tq1qvcnjubu7aya3lm6kw4z71qfaxyc39no78frtgi34hdonxsdg0n35s6da658845ocoepelwyujegzkdtdokkczjvdu0h0rum859rw4ljzra56unk3fylae5ap1lljl53j5zq6b5wy4cgd5f3644l8p2lfn47w8wnl4g8zpzee3qkawwbonh4k8aifqs7oypnzznn9s2fakhzr712zgxhxczfb2476htdepfwfoxc96u9omcladwz6bslv 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
z4g5tfroun20ez7w9m745skxjw0a154v0q1aap064aoj87dcio2sch2yuj28izyiu48wmyih35ay67j9mezly5iua3wolijcxywwcrrp6du9hy93nro4ylrj05wjb5e4dt49pqrjoljyimtxgravamxvp6maabkuef5x4qo20qtgm0zvh3jme3xod3ibc3iyc2xrvt9wlq02brijfe9sch4s9ra00liv08odqtk8exnl85k32pt7mazjees4i69px12syfdon3mk6b8obn1lkddjdkb4cyb4q803d2kzyrbf8wvxodmdyl6adbmamcbpkupim6rzi8bm0gfhfh692519lhbmbau4sa6r82vmn2dkbh1z3sm7z3hkr31loulavkucxoor0cz2mjcia8uuclw9etbi0g683965lunni7iyt92a7kp55fpka6l817s55rfq9989io8ywtvsi3k8wc8vmyz8ox8swdilcj7plm2l7wewdekh2anutlrrh47k95gefk5h1w3bgjkdbxk5dam05ae6dycsjezan4h3mrdwwwlz1p7dfus6dqnjh5y0eppiorsvqk4vi6rpckz0j6qjooqk8hbjtxsjmswnuqc42hkbotqgxlb0epz2dpqt8gnhmgfiuauej4lzhwtb76ut8wocnl91e6tz2wsax513qd3yh7eyif1ylgyz5d70ax2zpsfmmk7f3zdwr0nf0m6fgzeb06s5gdalejd99s64wam2h1hluss4rusdynynrno2tq1qvcnjubu7aya3lm6kw4z71qfaxyc39no78frtgi34hdonxsdg0n35s6da658845ocoepelwyujegzkdtdokkczjvdu0h0rum859rw4ljzra56unk3fylae5ap1lljl53j5zq6b5wy4cgd5f3644l8p2lfn47w8wnl4g8zpzee3qkawwbonh4k8aifqs7oypnzznn9s2fakhzr712zgxhxczfb2476htdepfwfoxc96u9omcladwz6bslv 00:08:42.048 10:22:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:42.048 [2024-12-10 10:22:17.141208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:42.048 [2024-12-10 10:22:17.141292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74070 ] 00:08:42.048 [2024-12-10 10:22:17.271661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.307 [2024-12-10 10:22:17.304902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.307 [2024-12-10 10:22:17.335145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.875  [2024-12-10T10:22:18.102Z] Copying: 511/511 [MB] (average 1395 MBps) 00:08:42.875 00:08:42.875 10:22:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:42.875 10:22:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:42.875 10:22:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:42.875 10:22:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:43.135 [2024-12-10 10:22:18.110426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:43.135 [2024-12-10 10:22:18.110531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74082 ] 00:08:43.135 { 00:08:43.135 "subsystems": [ 00:08:43.135 { 00:08:43.135 "subsystem": "bdev", 00:08:43.135 "config": [ 00:08:43.135 { 00:08:43.135 "params": { 00:08:43.135 "block_size": 512, 00:08:43.135 "num_blocks": 1048576, 00:08:43.135 "name": "malloc0" 00:08:43.135 }, 00:08:43.135 "method": "bdev_malloc_create" 00:08:43.135 }, 00:08:43.135 { 00:08:43.135 "params": { 00:08:43.135 "filename": "/dev/zram1", 00:08:43.135 "name": "uring0" 00:08:43.135 }, 00:08:43.135 "method": "bdev_uring_create" 00:08:43.135 }, 00:08:43.135 { 00:08:43.135 "method": "bdev_wait_for_examine" 00:08:43.135 } 00:08:43.135 ] 00:08:43.135 } 00:08:43.135 ] 00:08:43.135 } 00:08:43.135 [2024-12-10 10:22:18.248900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.135 [2024-12-10 10:22:18.279444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.135 [2024-12-10 10:22:18.305925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.512  [2024-12-10T10:22:20.725Z] Copying: 227/512 [MB] (227 MBps) [2024-12-10T10:22:20.725Z] Copying: 471/512 [MB] (243 MBps) [2024-12-10T10:22:20.984Z] Copying: 512/512 [MB] (average 236 MBps) 00:08:45.757 00:08:45.757 10:22:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:45.757 10:22:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:45.757 10:22:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:45.757 10:22:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:45.757 [2024-12-10 10:22:20.846940] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
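For the uring copy test above, the backing store is a 512 MiB zram block device that spdk_dd reaches through an io_uring bdev named uring0, paired with a 512 MiB malloc bdev (1048576 blocks of 512 bytes) that the data is later copied back into and diffed. A hedged sketch of that setup, using the standard zram sysfs interface and an assumed config path /tmp/uring_conf.json, could be:

  dev_id=$(cat /sys/class/zram-control/hot_add)     # this run got device 1
  echo 512M > "/sys/block/zram${dev_id}/disksize"   # size the zram device
  cat > /tmp/uring_conf.json <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
        "method": "bdev_malloc_create" },
      { "params": { "filename": "/dev/zram1", "name": "uring0" },
        "method": "bdev_uring_create" },
      { "method": "bdev_wait_for_examine" } ] } ] }
  JSON
  # write the ~512 MiB magic dump into the uring bdev; the test then reads it back and diffs it
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /tmp/uring_conf.json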
00:08:45.757 [2024-12-10 10:22:20.847039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74126 ] 00:08:45.757 { 00:08:45.757 "subsystems": [ 00:08:45.757 { 00:08:45.757 "subsystem": "bdev", 00:08:45.757 "config": [ 00:08:45.757 { 00:08:45.757 "params": { 00:08:45.757 "block_size": 512, 00:08:45.757 "num_blocks": 1048576, 00:08:45.757 "name": "malloc0" 00:08:45.757 }, 00:08:45.757 "method": "bdev_malloc_create" 00:08:45.757 }, 00:08:45.757 { 00:08:45.757 "params": { 00:08:45.757 "filename": "/dev/zram1", 00:08:45.757 "name": "uring0" 00:08:45.757 }, 00:08:45.757 "method": "bdev_uring_create" 00:08:45.757 }, 00:08:45.757 { 00:08:45.757 "method": "bdev_wait_for_examine" 00:08:45.757 } 00:08:45.757 ] 00:08:45.757 } 00:08:45.757 ] 00:08:45.757 } 00:08:45.757 [2024-12-10 10:22:20.978766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.015 [2024-12-10 10:22:21.010803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.015 [2024-12-10 10:22:21.037159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.952  [2024-12-10T10:22:23.554Z] Copying: 182/512 [MB] (182 MBps) [2024-12-10T10:22:24.490Z] Copying: 337/512 [MB] (154 MBps) [2024-12-10T10:22:24.490Z] Copying: 498/512 [MB] (160 MBps) [2024-12-10T10:22:24.490Z] Copying: 512/512 [MB] (average 166 MBps) 00:08:49.263 00:08:49.522 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:49.522 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ z4g5tfroun20ez7w9m745skxjw0a154v0q1aap064aoj87dcio2sch2yuj28izyiu48wmyih35ay67j9mezly5iua3wolijcxywwcrrp6du9hy93nro4ylrj05wjb5e4dt49pqrjoljyimtxgravamxvp6maabkuef5x4qo20qtgm0zvh3jme3xod3ibc3iyc2xrvt9wlq02brijfe9sch4s9ra00liv08odqtk8exnl85k32pt7mazjees4i69px12syfdon3mk6b8obn1lkddjdkb4cyb4q803d2kzyrbf8wvxodmdyl6adbmamcbpkupim6rzi8bm0gfhfh692519lhbmbau4sa6r82vmn2dkbh1z3sm7z3hkr31loulavkucxoor0cz2mjcia8uuclw9etbi0g683965lunni7iyt92a7kp55fpka6l817s55rfq9989io8ywtvsi3k8wc8vmyz8ox8swdilcj7plm2l7wewdekh2anutlrrh47k95gefk5h1w3bgjkdbxk5dam05ae6dycsjezan4h3mrdwwwlz1p7dfus6dqnjh5y0eppiorsvqk4vi6rpckz0j6qjooqk8hbjtxsjmswnuqc42hkbotqgxlb0epz2dpqt8gnhmgfiuauej4lzhwtb76ut8wocnl91e6tz2wsax513qd3yh7eyif1ylgyz5d70ax2zpsfmmk7f3zdwr0nf0m6fgzeb06s5gdalejd99s64wam2h1hluss4rusdynynrno2tq1qvcnjubu7aya3lm6kw4z71qfaxyc39no78frtgi34hdonxsdg0n35s6da658845ocoepelwyujegzkdtdokkczjvdu0h0rum859rw4ljzra56unk3fylae5ap1lljl53j5zq6b5wy4cgd5f3644l8p2lfn47w8wnl4g8zpzee3qkawwbonh4k8aifqs7oypnzznn9s2fakhzr712zgxhxczfb2476htdepfwfoxc96u9omcladwz6bslv == 
\z\4\g\5\t\f\r\o\u\n\2\0\e\z\7\w\9\m\7\4\5\s\k\x\j\w\0\a\1\5\4\v\0\q\1\a\a\p\0\6\4\a\o\j\8\7\d\c\i\o\2\s\c\h\2\y\u\j\2\8\i\z\y\i\u\4\8\w\m\y\i\h\3\5\a\y\6\7\j\9\m\e\z\l\y\5\i\u\a\3\w\o\l\i\j\c\x\y\w\w\c\r\r\p\6\d\u\9\h\y\9\3\n\r\o\4\y\l\r\j\0\5\w\j\b\5\e\4\d\t\4\9\p\q\r\j\o\l\j\y\i\m\t\x\g\r\a\v\a\m\x\v\p\6\m\a\a\b\k\u\e\f\5\x\4\q\o\2\0\q\t\g\m\0\z\v\h\3\j\m\e\3\x\o\d\3\i\b\c\3\i\y\c\2\x\r\v\t\9\w\l\q\0\2\b\r\i\j\f\e\9\s\c\h\4\s\9\r\a\0\0\l\i\v\0\8\o\d\q\t\k\8\e\x\n\l\8\5\k\3\2\p\t\7\m\a\z\j\e\e\s\4\i\6\9\p\x\1\2\s\y\f\d\o\n\3\m\k\6\b\8\o\b\n\1\l\k\d\d\j\d\k\b\4\c\y\b\4\q\8\0\3\d\2\k\z\y\r\b\f\8\w\v\x\o\d\m\d\y\l\6\a\d\b\m\a\m\c\b\p\k\u\p\i\m\6\r\z\i\8\b\m\0\g\f\h\f\h\6\9\2\5\1\9\l\h\b\m\b\a\u\4\s\a\6\r\8\2\v\m\n\2\d\k\b\h\1\z\3\s\m\7\z\3\h\k\r\3\1\l\o\u\l\a\v\k\u\c\x\o\o\r\0\c\z\2\m\j\c\i\a\8\u\u\c\l\w\9\e\t\b\i\0\g\6\8\3\9\6\5\l\u\n\n\i\7\i\y\t\9\2\a\7\k\p\5\5\f\p\k\a\6\l\8\1\7\s\5\5\r\f\q\9\9\8\9\i\o\8\y\w\t\v\s\i\3\k\8\w\c\8\v\m\y\z\8\o\x\8\s\w\d\i\l\c\j\7\p\l\m\2\l\7\w\e\w\d\e\k\h\2\a\n\u\t\l\r\r\h\4\7\k\9\5\g\e\f\k\5\h\1\w\3\b\g\j\k\d\b\x\k\5\d\a\m\0\5\a\e\6\d\y\c\s\j\e\z\a\n\4\h\3\m\r\d\w\w\w\l\z\1\p\7\d\f\u\s\6\d\q\n\j\h\5\y\0\e\p\p\i\o\r\s\v\q\k\4\v\i\6\r\p\c\k\z\0\j\6\q\j\o\o\q\k\8\h\b\j\t\x\s\j\m\s\w\n\u\q\c\4\2\h\k\b\o\t\q\g\x\l\b\0\e\p\z\2\d\p\q\t\8\g\n\h\m\g\f\i\u\a\u\e\j\4\l\z\h\w\t\b\7\6\u\t\8\w\o\c\n\l\9\1\e\6\t\z\2\w\s\a\x\5\1\3\q\d\3\y\h\7\e\y\i\f\1\y\l\g\y\z\5\d\7\0\a\x\2\z\p\s\f\m\m\k\7\f\3\z\d\w\r\0\n\f\0\m\6\f\g\z\e\b\0\6\s\5\g\d\a\l\e\j\d\9\9\s\6\4\w\a\m\2\h\1\h\l\u\s\s\4\r\u\s\d\y\n\y\n\r\n\o\2\t\q\1\q\v\c\n\j\u\b\u\7\a\y\a\3\l\m\6\k\w\4\z\7\1\q\f\a\x\y\c\3\9\n\o\7\8\f\r\t\g\i\3\4\h\d\o\n\x\s\d\g\0\n\3\5\s\6\d\a\6\5\8\8\4\5\o\c\o\e\p\e\l\w\y\u\j\e\g\z\k\d\t\d\o\k\k\c\z\j\v\d\u\0\h\0\r\u\m\8\5\9\r\w\4\l\j\z\r\a\5\6\u\n\k\3\f\y\l\a\e\5\a\p\1\l\l\j\l\5\3\j\5\z\q\6\b\5\w\y\4\c\g\d\5\f\3\6\4\4\l\8\p\2\l\f\n\4\7\w\8\w\n\l\4\g\8\z\p\z\e\e\3\q\k\a\w\w\b\o\n\h\4\k\8\a\i\f\q\s\7\o\y\p\n\z\z\n\n\9\s\2\f\a\k\h\z\r\7\1\2\z\g\x\h\x\c\z\f\b\2\4\7\6\h\t\d\e\p\f\w\f\o\x\c\9\6\u\9\o\m\c\l\a\d\w\z\6\b\s\l\v ]] 00:08:49.522 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:49.522 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ z4g5tfroun20ez7w9m745skxjw0a154v0q1aap064aoj87dcio2sch2yuj28izyiu48wmyih35ay67j9mezly5iua3wolijcxywwcrrp6du9hy93nro4ylrj05wjb5e4dt49pqrjoljyimtxgravamxvp6maabkuef5x4qo20qtgm0zvh3jme3xod3ibc3iyc2xrvt9wlq02brijfe9sch4s9ra00liv08odqtk8exnl85k32pt7mazjees4i69px12syfdon3mk6b8obn1lkddjdkb4cyb4q803d2kzyrbf8wvxodmdyl6adbmamcbpkupim6rzi8bm0gfhfh692519lhbmbau4sa6r82vmn2dkbh1z3sm7z3hkr31loulavkucxoor0cz2mjcia8uuclw9etbi0g683965lunni7iyt92a7kp55fpka6l817s55rfq9989io8ywtvsi3k8wc8vmyz8ox8swdilcj7plm2l7wewdekh2anutlrrh47k95gefk5h1w3bgjkdbxk5dam05ae6dycsjezan4h3mrdwwwlz1p7dfus6dqnjh5y0eppiorsvqk4vi6rpckz0j6qjooqk8hbjtxsjmswnuqc42hkbotqgxlb0epz2dpqt8gnhmgfiuauej4lzhwtb76ut8wocnl91e6tz2wsax513qd3yh7eyif1ylgyz5d70ax2zpsfmmk7f3zdwr0nf0m6fgzeb06s5gdalejd99s64wam2h1hluss4rusdynynrno2tq1qvcnjubu7aya3lm6kw4z71qfaxyc39no78frtgi34hdonxsdg0n35s6da658845ocoepelwyujegzkdtdokkczjvdu0h0rum859rw4ljzra56unk3fylae5ap1lljl53j5zq6b5wy4cgd5f3644l8p2lfn47w8wnl4g8zpzee3qkawwbonh4k8aifqs7oypnzznn9s2fakhzr712zgxhxczfb2476htdepfwfoxc96u9omcladwz6bslv == 
\z\4\g\5\t\f\r\o\u\n\2\0\e\z\7\w\9\m\7\4\5\s\k\x\j\w\0\a\1\5\4\v\0\q\1\a\a\p\0\6\4\a\o\j\8\7\d\c\i\o\2\s\c\h\2\y\u\j\2\8\i\z\y\i\u\4\8\w\m\y\i\h\3\5\a\y\6\7\j\9\m\e\z\l\y\5\i\u\a\3\w\o\l\i\j\c\x\y\w\w\c\r\r\p\6\d\u\9\h\y\9\3\n\r\o\4\y\l\r\j\0\5\w\j\b\5\e\4\d\t\4\9\p\q\r\j\o\l\j\y\i\m\t\x\g\r\a\v\a\m\x\v\p\6\m\a\a\b\k\u\e\f\5\x\4\q\o\2\0\q\t\g\m\0\z\v\h\3\j\m\e\3\x\o\d\3\i\b\c\3\i\y\c\2\x\r\v\t\9\w\l\q\0\2\b\r\i\j\f\e\9\s\c\h\4\s\9\r\a\0\0\l\i\v\0\8\o\d\q\t\k\8\e\x\n\l\8\5\k\3\2\p\t\7\m\a\z\j\e\e\s\4\i\6\9\p\x\1\2\s\y\f\d\o\n\3\m\k\6\b\8\o\b\n\1\l\k\d\d\j\d\k\b\4\c\y\b\4\q\8\0\3\d\2\k\z\y\r\b\f\8\w\v\x\o\d\m\d\y\l\6\a\d\b\m\a\m\c\b\p\k\u\p\i\m\6\r\z\i\8\b\m\0\g\f\h\f\h\6\9\2\5\1\9\l\h\b\m\b\a\u\4\s\a\6\r\8\2\v\m\n\2\d\k\b\h\1\z\3\s\m\7\z\3\h\k\r\3\1\l\o\u\l\a\v\k\u\c\x\o\o\r\0\c\z\2\m\j\c\i\a\8\u\u\c\l\w\9\e\t\b\i\0\g\6\8\3\9\6\5\l\u\n\n\i\7\i\y\t\9\2\a\7\k\p\5\5\f\p\k\a\6\l\8\1\7\s\5\5\r\f\q\9\9\8\9\i\o\8\y\w\t\v\s\i\3\k\8\w\c\8\v\m\y\z\8\o\x\8\s\w\d\i\l\c\j\7\p\l\m\2\l\7\w\e\w\d\e\k\h\2\a\n\u\t\l\r\r\h\4\7\k\9\5\g\e\f\k\5\h\1\w\3\b\g\j\k\d\b\x\k\5\d\a\m\0\5\a\e\6\d\y\c\s\j\e\z\a\n\4\h\3\m\r\d\w\w\w\l\z\1\p\7\d\f\u\s\6\d\q\n\j\h\5\y\0\e\p\p\i\o\r\s\v\q\k\4\v\i\6\r\p\c\k\z\0\j\6\q\j\o\o\q\k\8\h\b\j\t\x\s\j\m\s\w\n\u\q\c\4\2\h\k\b\o\t\q\g\x\l\b\0\e\p\z\2\d\p\q\t\8\g\n\h\m\g\f\i\u\a\u\e\j\4\l\z\h\w\t\b\7\6\u\t\8\w\o\c\n\l\9\1\e\6\t\z\2\w\s\a\x\5\1\3\q\d\3\y\h\7\e\y\i\f\1\y\l\g\y\z\5\d\7\0\a\x\2\z\p\s\f\m\m\k\7\f\3\z\d\w\r\0\n\f\0\m\6\f\g\z\e\b\0\6\s\5\g\d\a\l\e\j\d\9\9\s\6\4\w\a\m\2\h\1\h\l\u\s\s\4\r\u\s\d\y\n\y\n\r\n\o\2\t\q\1\q\v\c\n\j\u\b\u\7\a\y\a\3\l\m\6\k\w\4\z\7\1\q\f\a\x\y\c\3\9\n\o\7\8\f\r\t\g\i\3\4\h\d\o\n\x\s\d\g\0\n\3\5\s\6\d\a\6\5\8\8\4\5\o\c\o\e\p\e\l\w\y\u\j\e\g\z\k\d\t\d\o\k\k\c\z\j\v\d\u\0\h\0\r\u\m\8\5\9\r\w\4\l\j\z\r\a\5\6\u\n\k\3\f\y\l\a\e\5\a\p\1\l\l\j\l\5\3\j\5\z\q\6\b\5\w\y\4\c\g\d\5\f\3\6\4\4\l\8\p\2\l\f\n\4\7\w\8\w\n\l\4\g\8\z\p\z\e\e\3\q\k\a\w\w\b\o\n\h\4\k\8\a\i\f\q\s\7\o\y\p\n\z\z\n\n\9\s\2\f\a\k\h\z\r\7\1\2\z\g\x\h\x\c\z\f\b\2\4\7\6\h\t\d\e\p\f\w\f\o\x\c\9\6\u\9\o\m\c\l\a\d\w\z\6\b\s\l\v ]] 00:08:49.522 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:49.781 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:49.781 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:49.781 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:49.781 10:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:49.781 [2024-12-10 10:22:24.947406] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:49.781 [2024-12-10 10:22:24.947558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:08:49.781 { 00:08:49.781 "subsystems": [ 00:08:49.781 { 00:08:49.781 "subsystem": "bdev", 00:08:49.781 "config": [ 00:08:49.781 { 00:08:49.781 "params": { 00:08:49.781 "block_size": 512, 00:08:49.781 "num_blocks": 1048576, 00:08:49.781 "name": "malloc0" 00:08:49.781 }, 00:08:49.781 "method": "bdev_malloc_create" 00:08:49.781 }, 00:08:49.781 { 00:08:49.781 "params": { 00:08:49.781 "filename": "/dev/zram1", 00:08:49.781 "name": "uring0" 00:08:49.781 }, 00:08:49.781 "method": "bdev_uring_create" 00:08:49.781 }, 00:08:49.781 { 00:08:49.781 "method": "bdev_wait_for_examine" 00:08:49.781 } 00:08:49.781 ] 00:08:49.781 } 00:08:49.781 ] 00:08:49.781 } 00:08:50.041 [2024-12-10 10:22:25.085538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.041 [2024-12-10 10:22:25.124809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.041 [2024-12-10 10:22:25.156875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.417  [2024-12-10T10:22:27.581Z] Copying: 142/512 [MB] (142 MBps) [2024-12-10T10:22:28.517Z] Copying: 286/512 [MB] (144 MBps) [2024-12-10T10:22:29.085Z] Copying: 430/512 [MB] (144 MBps) [2024-12-10T10:22:29.344Z] Copying: 512/512 [MB] (average 143 MBps) 00:08:54.117 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:54.117 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:54.117 { 00:08:54.117 "subsystems": [ 00:08:54.117 { 00:08:54.117 "subsystem": "bdev", 00:08:54.117 "config": [ 00:08:54.117 { 00:08:54.117 "params": { 00:08:54.117 "block_size": 512, 00:08:54.117 "num_blocks": 1048576, 00:08:54.117 "name": "malloc0" 00:08:54.117 }, 00:08:54.117 "method": "bdev_malloc_create" 00:08:54.117 }, 00:08:54.117 { 00:08:54.117 "params": { 00:08:54.117 "filename": "/dev/zram1", 00:08:54.117 "name": "uring0" 00:08:54.117 }, 00:08:54.117 "method": "bdev_uring_create" 00:08:54.117 }, 00:08:54.117 { 00:08:54.117 "params": { 00:08:54.117 "name": "uring0" 00:08:54.117 }, 00:08:54.117 "method": "bdev_uring_delete" 00:08:54.117 }, 00:08:54.117 { 00:08:54.117 "method": "bdev_wait_for_examine" 00:08:54.117 } 00:08:54.117 ] 00:08:54.117 } 00:08:54.117 ] 00:08:54.117 } 00:08:54.117 [2024-12-10 10:22:29.181143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:54.117 [2024-12-10 10:22:29.181322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74247 ] 00:08:54.117 [2024-12-10 10:22:29.318374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.376 [2024-12-10 10:22:29.357341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.376 [2024-12-10 10:22:29.389794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.376  [2024-12-10T10:22:29.861Z] Copying: 0/0 [B] (average 0 Bps) 00:08:54.634 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.634 10:22:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.634 [2024-12-10 10:22:29.837695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
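The NOT-wrapped spdk_dd call starting above is a negative check: the JSON it receives deletes uring0 (bdev_uring_delete) before the copy, so spdk_dd is expected to fail to open the bdev, and the NOT helper inverts the exit status so the test only passes on failure. A simplified sketch of that expectation, assuming the delete config sits at /tmp/uring_delete.json and writing to /dev/null instead of the test's anonymous file descriptor, might be:

  # spdk_dd must NOT succeed once uring0 has been deleted by the config
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/null \
        --json /tmp/uring_delete.json; then
      echo 'unexpected success: uring0 should already be deleted' >&2
      exit 1
  fi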
00:08:54.634 [2024-12-10 10:22:29.837803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74272 ] 00:08:54.634 { 00:08:54.634 "subsystems": [ 00:08:54.634 { 00:08:54.634 "subsystem": "bdev", 00:08:54.634 "config": [ 00:08:54.634 { 00:08:54.634 "params": { 00:08:54.634 "block_size": 512, 00:08:54.634 "num_blocks": 1048576, 00:08:54.634 "name": "malloc0" 00:08:54.634 }, 00:08:54.634 "method": "bdev_malloc_create" 00:08:54.634 }, 00:08:54.634 { 00:08:54.634 "params": { 00:08:54.634 "filename": "/dev/zram1", 00:08:54.634 "name": "uring0" 00:08:54.634 }, 00:08:54.634 "method": "bdev_uring_create" 00:08:54.634 }, 00:08:54.634 { 00:08:54.634 "params": { 00:08:54.634 "name": "uring0" 00:08:54.634 }, 00:08:54.634 "method": "bdev_uring_delete" 00:08:54.634 }, 00:08:54.634 { 00:08:54.634 "method": "bdev_wait_for_examine" 00:08:54.634 } 00:08:54.634 ] 00:08:54.634 } 00:08:54.634 ] 00:08:54.634 } 00:08:54.893 [2024-12-10 10:22:29.978328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.893 [2024-12-10 10:22:30.024032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.893 [2024-12-10 10:22:30.060257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.152 [2024-12-10 10:22:30.193320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:55.152 [2024-12-10 10:22:30.193384] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:55.152 [2024-12-10 10:22:30.193422] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:55.152 [2024-12-10 10:22:30.193433] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.152 [2024-12-10 10:22:30.373562] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:55.429 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:55.702 00:08:55.702 real 0m13.667s 00:08:55.702 user 0m9.226s 00:08:55.702 sys 0m11.528s 00:08:55.702 10:22:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.702 10:22:30 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:55.702 ************************************ 00:08:55.702 END TEST dd_uring_copy 00:08:55.702 ************************************ 00:08:55.702 00:08:55.702 real 0m13.908s 00:08:55.702 user 0m9.358s 00:08:55.702 sys 0m11.637s 00:08:55.702 10:22:30 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.702 ************************************ 00:08:55.702 END TEST spdk_dd_uring 00:08:55.702 10:22:30 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:55.702 ************************************ 00:08:55.702 10:22:30 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:55.702 10:22:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.702 10:22:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.702 10:22:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:55.702 ************************************ 00:08:55.702 START TEST spdk_dd_sparse 00:08:55.702 ************************************ 00:08:55.702 10:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:55.702 * Looking for test storage... 00:08:55.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:55.702 10:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:55.702 10:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:55.702 10:22:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.961 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.962 --rc genhtml_branch_coverage=1 00:08:55.962 --rc genhtml_function_coverage=1 00:08:55.962 --rc genhtml_legend=1 00:08:55.962 --rc geninfo_all_blocks=1 00:08:55.962 --rc geninfo_unexecuted_blocks=1 00:08:55.962 00:08:55.962 ' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.962 --rc genhtml_branch_coverage=1 00:08:55.962 --rc genhtml_function_coverage=1 00:08:55.962 --rc genhtml_legend=1 00:08:55.962 --rc geninfo_all_blocks=1 00:08:55.962 --rc geninfo_unexecuted_blocks=1 00:08:55.962 00:08:55.962 ' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.962 --rc genhtml_branch_coverage=1 00:08:55.962 --rc genhtml_function_coverage=1 00:08:55.962 --rc genhtml_legend=1 00:08:55.962 --rc geninfo_all_blocks=1 00:08:55.962 --rc geninfo_unexecuted_blocks=1 00:08:55.962 00:08:55.962 ' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.962 --rc genhtml_branch_coverage=1 00:08:55.962 --rc genhtml_function_coverage=1 00:08:55.962 --rc genhtml_legend=1 00:08:55.962 --rc geninfo_all_blocks=1 00:08:55.962 --rc geninfo_unexecuted_blocks=1 00:08:55.962 00:08:55.962 ' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.962 10:22:31 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:55.962 1+0 records in 00:08:55.962 1+0 records out 00:08:55.962 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.004093 s, 1.0 GB/s 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:55.962 1+0 records in 00:08:55.962 1+0 records out 00:08:55.962 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00573179 s, 732 MB/s 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:55.962 1+0 records in 00:08:55.962 1+0 records out 00:08:55.962 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00660186 s, 635 MB/s 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:55.962 ************************************ 00:08:55.962 START TEST dd_sparse_file_to_file 00:08:55.962 ************************************ 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:55.962 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:55.962 [2024-12-10 10:22:31.129245] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:55.962 [2024-12-10 10:22:31.129368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74370 ] 00:08:55.962 { 00:08:55.962 "subsystems": [ 00:08:55.962 { 00:08:55.962 "subsystem": "bdev", 00:08:55.962 "config": [ 00:08:55.962 { 00:08:55.962 "params": { 00:08:55.962 "block_size": 4096, 00:08:55.962 "filename": "dd_sparse_aio_disk", 00:08:55.962 "name": "dd_aio" 00:08:55.962 }, 00:08:55.962 "method": "bdev_aio_create" 00:08:55.962 }, 00:08:55.962 { 00:08:55.962 "params": { 00:08:55.962 "lvs_name": "dd_lvstore", 00:08:55.962 "bdev_name": "dd_aio" 00:08:55.962 }, 00:08:55.962 "method": "bdev_lvol_create_lvstore" 00:08:55.962 }, 00:08:55.962 { 00:08:55.962 "method": "bdev_wait_for_examine" 00:08:55.962 } 00:08:55.962 ] 00:08:55.962 } 00:08:55.962 ] 00:08:55.962 } 00:08:56.221 [2024-12-10 10:22:31.270031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.221 [2024-12-10 10:22:31.315273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.221 [2024-12-10 10:22:31.349685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.221  [2024-12-10T10:22:31.707Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:56.480 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:56.480 00:08:56.480 real 0m0.546s 00:08:56.480 user 0m0.327s 00:08:56.480 sys 0m0.267s 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.480 ************************************ 00:08:56.480 END TEST dd_sparse_file_to_file 00:08:56.480 ************************************ 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:56.480 ************************************ 00:08:56.480 START TEST dd_sparse_file_to_bdev 
00:08:56.480 ************************************ 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:56.480 10:22:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:56.739 [2024-12-10 10:22:31.727361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:56.739 [2024-12-10 10:22:31.727472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74407 ] 00:08:56.739 { 00:08:56.739 "subsystems": [ 00:08:56.739 { 00:08:56.739 "subsystem": "bdev", 00:08:56.739 "config": [ 00:08:56.739 { 00:08:56.739 "params": { 00:08:56.739 "block_size": 4096, 00:08:56.739 "filename": "dd_sparse_aio_disk", 00:08:56.739 "name": "dd_aio" 00:08:56.739 }, 00:08:56.739 "method": "bdev_aio_create" 00:08:56.739 }, 00:08:56.739 { 00:08:56.739 "params": { 00:08:56.739 "lvs_name": "dd_lvstore", 00:08:56.739 "lvol_name": "dd_lvol", 00:08:56.739 "size_in_mib": 36, 00:08:56.739 "thin_provision": true 00:08:56.739 }, 00:08:56.739 "method": "bdev_lvol_create" 00:08:56.739 }, 00:08:56.739 { 00:08:56.739 "method": "bdev_wait_for_examine" 00:08:56.739 } 00:08:56.739 ] 00:08:56.739 } 00:08:56.739 ] 00:08:56.739 } 00:08:56.739 [2024-12-10 10:22:31.866676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.739 [2024-12-10 10:22:31.907144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.739 [2024-12-10 10:22:31.939488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.998  [2024-12-10T10:22:32.225Z] Copying: 12/36 [MB] (average 600 MBps) 00:08:56.998 00:08:56.998 00:08:56.998 real 0m0.506s 00:08:56.998 user 0m0.311s 00:08:56.998 sys 0m0.255s 00:08:56.998 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.998 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 ************************************ 00:08:56.998 END TEST dd_sparse_file_to_bdev 00:08:56.998 ************************************ 00:08:56.998 10:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:56.998 10:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.998 10:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.998 10:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:57.258 ************************************ 00:08:57.258 START TEST dd_sparse_bdev_to_file 00:08:57.258 ************************************ 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:57.258 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:57.258 [2024-12-10 10:22:32.286915] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.258 [2024-12-10 10:22:32.287010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74445 ] 00:08:57.258 { 00:08:57.258 "subsystems": [ 00:08:57.258 { 00:08:57.258 "subsystem": "bdev", 00:08:57.258 "config": [ 00:08:57.258 { 00:08:57.258 "params": { 00:08:57.258 "block_size": 4096, 00:08:57.258 "filename": "dd_sparse_aio_disk", 00:08:57.258 "name": "dd_aio" 00:08:57.258 }, 00:08:57.258 "method": "bdev_aio_create" 00:08:57.258 }, 00:08:57.258 { 00:08:57.258 "method": "bdev_wait_for_examine" 00:08:57.258 } 00:08:57.258 ] 00:08:57.258 } 00:08:57.258 ] 00:08:57.258 } 00:08:57.258 [2024-12-10 10:22:32.425750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.258 [2024-12-10 10:22:32.474091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.517 [2024-12-10 10:22:32.513010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.517  [2024-12-10T10:22:32.744Z] Copying: 12/36 [MB] (average 857 MBps) 00:08:57.517 00:08:57.517 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:57.517 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:57.517 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:57.776 00:08:57.776 real 0m0.527s 00:08:57.776 user 0m0.319s 00:08:57.776 sys 0m0.269s 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:57.776 ************************************ 00:08:57.776 END TEST dd_sparse_bdev_to_file 00:08:57.776 ************************************ 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:57.776 00:08:57.776 real 0m1.993s 00:08:57.776 user 0m1.141s 00:08:57.776 sys 0m1.011s 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.776 10:22:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:57.776 ************************************ 00:08:57.776 END TEST spdk_dd_sparse 00:08:57.776 ************************************ 00:08:57.776 10:22:32 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:57.776 10:22:32 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.776 10:22:32 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.776 10:22:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:57.776 ************************************ 00:08:57.776 START TEST spdk_dd_negative 00:08:57.776 ************************************ 00:08:57.776 10:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:57.776 * Looking for test storage... 
00:08:57.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:57.776 10:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:57.776 10:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:57.776 10:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.036 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:58.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.037 --rc genhtml_branch_coverage=1 00:08:58.037 --rc genhtml_function_coverage=1 00:08:58.037 --rc genhtml_legend=1 00:08:58.037 --rc geninfo_all_blocks=1 00:08:58.037 --rc geninfo_unexecuted_blocks=1 00:08:58.037 00:08:58.037 ' 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:58.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.037 --rc genhtml_branch_coverage=1 00:08:58.037 --rc genhtml_function_coverage=1 00:08:58.037 --rc genhtml_legend=1 00:08:58.037 --rc geninfo_all_blocks=1 00:08:58.037 --rc geninfo_unexecuted_blocks=1 00:08:58.037 00:08:58.037 ' 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:58.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.037 --rc genhtml_branch_coverage=1 00:08:58.037 --rc genhtml_function_coverage=1 00:08:58.037 --rc genhtml_legend=1 00:08:58.037 --rc geninfo_all_blocks=1 00:08:58.037 --rc geninfo_unexecuted_blocks=1 00:08:58.037 00:08:58.037 ' 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:58.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.037 --rc genhtml_branch_coverage=1 00:08:58.037 --rc genhtml_function_coverage=1 00:08:58.037 --rc genhtml_legend=1 00:08:58.037 --rc geninfo_all_blocks=1 00:08:58.037 --rc geninfo_unexecuted_blocks=1 00:08:58.037 00:08:58.037 ' 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.037 ************************************ 00:08:58.037 START TEST 
dd_invalid_arguments 00:08:58.037 ************************************ 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.037 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:58.037 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:58.037 00:08:58.037 CPU options: 00:08:58.037 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:58.037 (like [0,1,10]) 00:08:58.037 --lcores lcore to CPU mapping list. The list is in the format: 00:08:58.037 [<,lcores[@CPUs]>...] 00:08:58.037 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:58.037 Within the group, '-' is used for range separator, 00:08:58.037 ',' is used for single number separator. 00:08:58.037 '( )' can be omitted for single element group, 00:08:58.037 '@' can be omitted if cpus and lcores have the same value 00:08:58.037 --disable-cpumask-locks Disable CPU core lock files. 00:08:58.037 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:58.037 pollers in the app support interrupt mode) 00:08:58.037 -p, --main-core main (primary) core for DPDK 00:08:58.037 00:08:58.037 Configuration options: 00:08:58.037 -c, --config, --json JSON config file 00:08:58.037 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:58.037 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:58.037 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:58.037 --rpcs-allowed comma-separated list of permitted RPCS 00:08:58.037 --json-ignore-init-errors don't exit on invalid config entry 00:08:58.037 00:08:58.037 Memory options: 00:08:58.037 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:58.037 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:58.037 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:58.037 -R, --huge-unlink unlink huge files after initialization 00:08:58.037 -n, --mem-channels number of memory channels used for DPDK 00:08:58.037 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:58.037 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:58.037 --no-huge run without using hugepages 00:08:58.037 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:58.037 -i, --shm-id shared memory ID (optional) 00:08:58.037 -g, --single-file-segments force creating just one hugetlbfs file 00:08:58.037 00:08:58.037 PCI options: 00:08:58.037 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:58.037 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:58.037 -u, --no-pci disable PCI access 00:08:58.037 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:58.037 00:08:58.037 Log options: 00:08:58.037 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:58.037 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:58.037 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:58.037 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:58.037 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:08:58.037 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:08:58.037 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:08:58.037 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:08:58.037 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:58.037 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:08:58.037 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:08:58.037 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:08:58.038 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:58.038 --silence-noticelog disable notice level logging to stderr 00:08:58.038 00:08:58.038 Trace options: 00:08:58.038 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:58.038 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:58.038 [2024-12-10 10:22:33.153880] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:58.038 setting 0 to disable trace (default 32768) 00:08:58.038 Tracepoints vary in size and can use more than one trace entry. 00:08:58.038 -e, --tpoint-group [:] 00:08:58.038 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:08:58.038 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:08:58.038 blob, bdev_raid, all). 00:08:58.038 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:58.038 a tracepoint group. First tpoint inside a group can be enabled by 00:08:58.038 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:58.038 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:08:58.038 in /include/spdk_internal/trace_defs.h 00:08:58.038 00:08:58.038 Other options: 00:08:58.038 -h, --help show this usage 00:08:58.038 -v, --version print SPDK version 00:08:58.038 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:58.038 --env-context Opaque context for use of the env implementation 00:08:58.038 00:08:58.038 Application specific: 00:08:58.038 [--------- DD Options ---------] 00:08:58.038 --if Input file. Must specify either --if or --ib. 00:08:58.038 --ib Input bdev. Must specifier either --if or --ib 00:08:58.038 --of Output file. Must specify either --of or --ob. 00:08:58.038 --ob Output bdev. Must specify either --of or --ob. 00:08:58.038 --iflag Input file flags. 00:08:58.038 --oflag Output file flags. 00:08:58.038 --bs I/O unit size (default: 4096) 00:08:58.038 --qd Queue depth (default: 2) 00:08:58.038 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:58.038 --skip Skip this many I/O units at start of input. (default: 0) 00:08:58.038 --seek Skip this many I/O units at start of output. (default: 0) 00:08:58.038 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:58.038 --sparse Enable hole skipping in input target 00:08:58.038 Available iflag and oflag values: 00:08:58.038 append - append mode 00:08:58.038 direct - use direct I/O for data 00:08:58.038 directory - fail unless a directory 00:08:58.038 dsync - use synchronized I/O for data 00:08:58.038 noatime - do not update access time 00:08:58.038 noctty - do not assign controlling terminal from file 00:08:58.038 nofollow - do not follow symlinks 00:08:58.038 nonblock - use non-blocking I/O 00:08:58.038 sync - use synchronized I/O for data and metadata 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.038 00:08:58.038 real 0m0.084s 00:08:58.038 user 0m0.051s 00:08:58.038 sys 0m0.032s 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.038 ************************************ 00:08:58.038 END TEST dd_invalid_arguments 00:08:58.038 ************************************ 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 ************************************ 00:08:58.038 START TEST dd_double_input 00:08:58.038 ************************************ 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.038 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.297 [2024-12-10 10:22:33.287366] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
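A minimal sketch of the invocation pattern the dd_double_input case above exercises, reusing only the flags and paths already visible in this trace (nothing beyond them is assumed; the empty --ib=/--ob= values mirror the test's own arguments):

  # Passing both a file input (--if) and a bdev input (--ib) makes spdk_dd
  # exit before any I/O with: "You may specify either --if or --ib, but not both."
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ib= --ob=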
00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.297 00:08:58.297 real 0m0.081s 00:08:58.297 user 0m0.047s 00:08:58.297 sys 0m0.033s 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:58.297 ************************************ 00:08:58.297 END TEST dd_double_input 00:08:58.297 ************************************ 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.297 ************************************ 00:08:58.297 START TEST dd_double_output 00:08:58.297 ************************************ 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.297 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.298 [2024-12-10 10:22:33.425134] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.298 00:08:58.298 real 0m0.084s 00:08:58.298 user 0m0.057s 00:08:58.298 sys 0m0.026s 00:08:58.298 ************************************ 00:08:58.298 END TEST dd_double_output 00:08:58.298 ************************************ 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.298 ************************************ 00:08:58.298 START TEST dd_no_input 00:08:58.298 ************************************ 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.298 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.558 [2024-12-10 10:22:33.564238] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.558 00:08:58.558 real 0m0.084s 00:08:58.558 user 0m0.055s 00:08:58.558 sys 0m0.027s 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:58.558 ************************************ 00:08:58.558 END TEST dd_no_input 00:08:58.558 ************************************ 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.558 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.559 ************************************ 00:08:58.559 START TEST dd_no_output 00:08:58.559 ************************************ 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.559 [2024-12-10 10:22:33.701071] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:58.559 10:22:33 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.559 00:08:58.559 real 0m0.081s 00:08:58.559 user 0m0.049s 00:08:58.559 sys 0m0.032s 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.559 ************************************ 00:08:58.559 END TEST dd_no_output 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:58.559 ************************************ 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.559 ************************************ 00:08:58.559 START TEST dd_wrong_blocksize 00:08:58.559 ************************************ 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.559 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:58.818 [2024-12-10 10:22:33.836770] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.818 00:08:58.818 real 0m0.080s 00:08:58.818 user 0m0.050s 00:08:58.818 sys 0m0.029s 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:58.818 ************************************ 00:08:58.818 END TEST dd_wrong_blocksize 00:08:58.818 ************************************ 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.818 ************************************ 00:08:58.818 START TEST dd_smaller_blocksize 00:08:58.818 ************************************ 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.818 
10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.818 10:22:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:58.818 [2024-12-10 10:22:33.965416] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:58.818 [2024-12-10 10:22:33.965504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74670 ] 00:08:59.078 [2024-12-10 10:22:34.106328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.078 [2024-12-10 10:22:34.149334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.078 [2024-12-10 10:22:34.184168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.078 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:59.078 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:59.078 [2024-12-10 10:22:34.203491] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:59.078 [2024-12-10 10:22:34.203522] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.078 [2024-12-10 10:22:34.274116] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.337 00:08:59.337 real 0m0.443s 00:08:59.337 user 0m0.231s 00:08:59.337 sys 0m0.107s 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:59.337 ************************************ 00:08:59.337 END TEST dd_smaller_blocksize 00:08:59.337 ************************************ 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.337 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.337 ************************************ 00:08:59.337 START TEST dd_invalid_count 00:08:59.337 ************************************ 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
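Every negative case in this log follows the same assertion shape: the spdk_dd invocation is wrapped, its exit status is captured as es, statuses above 128 are reduced (the trace shows 244 -> 116, 234 -> 106, 228 -> 100, i.e. minus 128, then collapsed to 1), and the test passes only when es is non-zero. A minimal sketch of that pattern, reconstructed from the xtrace output rather than from the real common/autotest_common.sh helpers; assert_fails is a made-up name used only for illustration:

    # Illustrative reduction of the NOT / es pattern visible in the xtrace lines.
    assert_fails() {
        local es=0
        "$@" || es=$?                    # run the spdk_dd invocation under test
        if (( es > 128 )); then          # the trace shows 244 -> 116, 234 -> 106, 228 -> 100
            es=$(( es - 128 ))
        fi
        (( es != 0 ))                    # the negative test passes only if spdk_dd failed
    }
    assert_fails /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9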
00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:59.338 [2024-12-10 10:22:34.469183] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.338 00:08:59.338 real 0m0.084s 00:08:59.338 user 0m0.046s 00:08:59.338 sys 0m0.036s 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.338 ************************************ 00:08:59.338 END TEST dd_invalid_count 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:59.338 ************************************ 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.338 ************************************ 
00:08:59.338 START TEST dd_invalid_oflag 00:08:59.338 ************************************ 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.338 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:59.598 [2024-12-10 10:22:34.593429] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.598 00:08:59.598 real 0m0.066s 00:08:59.598 user 0m0.038s 00:08:59.598 sys 0m0.027s 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 ************************************ 00:08:59.598 END TEST dd_invalid_oflag 00:08:59.598 ************************************ 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 ************************************ 00:08:59.598 START TEST dd_invalid_iflag 00:08:59.598 
************************************ 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:59.598 [2024-12-10 10:22:34.732953] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.598 00:08:59.598 real 0m0.083s 00:08:59.598 user 0m0.049s 00:08:59.598 sys 0m0.032s 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 ************************************ 00:08:59.598 END TEST dd_invalid_iflag 00:08:59.598 ************************************ 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.598 ************************************ 00:08:59.598 START TEST dd_unknown_flag 00:08:59.598 ************************************ 00:08:59.598 
10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.598 10:22:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:59.858 [2024-12-10 10:22:34.864210] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
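The unknown-flag case that starts here simply hands spdk_dd a flag value it cannot parse; the failure it is fishing for is the parse_flags error recorded a few lines below. A standalone reproduction with the same paths would look roughly like this (sketch only; exit-status handling is left to the caller):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --oflag=-1
    # expected (per the log below): parse_flags: *ERROR*: Unknown file flag: -1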
00:08:59.858 [2024-12-10 10:22:34.864329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74758 ] 00:08:59.858 [2024-12-10 10:22:34.996566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.858 [2024-12-10 10:22:35.030094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.858 [2024-12-10 10:22:35.057633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.858 [2024-12-10 10:22:35.072652] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:59.858 [2024-12-10 10:22:35.072720] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.858 [2024-12-10 10:22:35.072784] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:59.858 [2024-12-10 10:22:35.072796] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.858 [2024-12-10 10:22:35.073018] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:59.858 [2024-12-10 10:22:35.073033] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.858 [2024-12-10 10:22:35.073132] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:59.858 [2024-12-10 10:22:35.073143] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:00.116 [2024-12-10 10:22:35.136890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.116 00:09:00.116 real 0m0.396s 00:09:00.116 user 0m0.197s 00:09:00.116 sys 0m0.103s 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.116 10:22:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:00.117 ************************************ 00:09:00.117 END TEST dd_unknown_flag 00:09:00.117 ************************************ 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.117 ************************************ 00:09:00.117 START TEST dd_invalid_json 00:09:00.117 ************************************ 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.117 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:00.117 [2024-12-10 10:22:35.320527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
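The bare `:` command feeding /dev/fd/62 above is what makes this JSON invalid: spdk_dd is handed a config stream with no bytes in it. A rough equivalent, assuming --json accepts any readable fd path the way it accepts the test's /dev/fd/62 (the process substitution below is an illustration, not the test's exact plumbing):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(:)
    # expected (per the log below): parse_json: *ERROR*: JSON data cannot be empty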
00:09:00.117 [2024-12-10 10:22:35.320623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:09:00.375 [2024-12-10 10:22:35.459760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.375 [2024-12-10 10:22:35.493687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.375 [2024-12-10 10:22:35.493779] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:00.375 [2024-12-10 10:22:35.493792] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:00.375 [2024-12-10 10:22:35.493800] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.375 [2024-12-10 10:22:35.493833] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.375 00:09:00.375 real 0m0.308s 00:09:00.375 user 0m0.143s 00:09:00.375 sys 0m0.064s 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.375 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 ************************************ 00:09:00.375 END TEST dd_invalid_json 00:09:00.375 ************************************ 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.634 ************************************ 00:09:00.634 START TEST dd_invalid_seek 00:09:00.634 ************************************ 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:00.634 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:00.635 
10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.635 10:22:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:00.635 [2024-12-10 10:22:35.670299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
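Both malloc bdevs in this case are created with 512 blocks of 512 bytes, so --seek=513 points one block past the end of the output bdev; that is the bound the error below trips over. Condensed into a single command, with the gen_conf JSON (printed in full further down in the log) inlined through process substitution purely for illustration:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 \
        --json <(printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
          {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
          {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
          {"method":"bdev_wait_for_examine"}]}]}')
    # expected: --seek value too big (513) - only 512 blocks available in output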
00:09:00.635 [2024-12-10 10:22:35.670397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74816 ] 00:09:00.635 { 00:09:00.635 "subsystems": [ 00:09:00.635 { 00:09:00.635 "subsystem": "bdev", 00:09:00.635 "config": [ 00:09:00.635 { 00:09:00.635 "params": { 00:09:00.635 "block_size": 512, 00:09:00.635 "num_blocks": 512, 00:09:00.635 "name": "malloc0" 00:09:00.635 }, 00:09:00.635 "method": "bdev_malloc_create" 00:09:00.635 }, 00:09:00.635 { 00:09:00.635 "params": { 00:09:00.635 "block_size": 512, 00:09:00.635 "num_blocks": 512, 00:09:00.635 "name": "malloc1" 00:09:00.635 }, 00:09:00.635 "method": "bdev_malloc_create" 00:09:00.635 }, 00:09:00.635 { 00:09:00.635 "method": "bdev_wait_for_examine" 00:09:00.635 } 00:09:00.635 ] 00:09:00.635 } 00:09:00.635 ] 00:09:00.635 } 00:09:00.635 [2024-12-10 10:22:35.800030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.635 [2024-12-10 10:22:35.843000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.893 [2024-12-10 10:22:35.875545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.893 [2024-12-10 10:22:35.917743] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:00.893 [2024-12-10 10:22:35.917828] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.893 [2024-12-10 10:22:35.978539] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:00.893 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:09:00.893 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.893 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.894 00:09:00.894 real 0m0.430s 00:09:00.894 user 0m0.278s 00:09:00.894 sys 0m0.114s 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:00.894 ************************************ 00:09:00.894 END TEST dd_invalid_seek 00:09:00.894 ************************************ 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.894 ************************************ 00:09:00.894 START TEST dd_invalid_skip 00:09:00.894 ************************************ 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.894 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:01.153 { 00:09:01.153 "subsystems": [ 00:09:01.153 { 00:09:01.153 "subsystem": "bdev", 00:09:01.153 "config": [ 00:09:01.153 { 00:09:01.153 "params": { 00:09:01.153 "block_size": 512, 00:09:01.153 "num_blocks": 512, 00:09:01.153 "name": "malloc0" 00:09:01.153 }, 00:09:01.153 "method": "bdev_malloc_create" 00:09:01.153 }, 00:09:01.153 { 00:09:01.153 "params": { 00:09:01.153 "block_size": 512, 00:09:01.153 "num_blocks": 512, 00:09:01.153 "name": "malloc1" 
00:09:01.153 }, 00:09:01.153 "method": "bdev_malloc_create" 00:09:01.153 }, 00:09:01.153 { 00:09:01.153 "method": "bdev_wait_for_examine" 00:09:01.153 } 00:09:01.153 ] 00:09:01.153 } 00:09:01.153 ] 00:09:01.153 } 00:09:01.153 [2024-12-10 10:22:36.175372] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:01.153 [2024-12-10 10:22:36.175494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:09:01.153 [2024-12-10 10:22:36.313192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.153 [2024-12-10 10:22:36.346262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.153 [2024-12-10 10:22:36.374715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.413 [2024-12-10 10:22:36.417810] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:01.413 [2024-12-10 10:22:36.417916] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.413 [2024-12-10 10:22:36.478569] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.413 00:09:01.413 real 0m0.444s 00:09:01.413 user 0m0.287s 00:09:01.413 sys 0m0.118s 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:01.413 ************************************ 00:09:01.413 END TEST dd_invalid_skip 00:09:01.413 ************************************ 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.413 ************************************ 00:09:01.413 START TEST dd_invalid_input_count 00:09:01.413 ************************************ 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:01.413 10:22:36 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.413 10:22:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:01.672 [2024-12-10 10:22:36.667283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
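Seek, skip, and count in this stretch are all checked against the same pair of 512-block, 512-byte malloc bdevs; the three negative cases differ only in which bound the value 513 violates. For quick comparison, the error strings as this log records them (seek and skip above, count just below):

    --seek=513   ->  --seek value too big (513) - only 512 blocks available in output
    --skip=513   ->  --skip value too big (513) - only 512 blocks available in input
    --count=513  ->  --count value too big (513) - only 512 blocks available from input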
00:09:01.672 [2024-12-10 10:22:36.667366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74883 ] 00:09:01.672 { 00:09:01.672 "subsystems": [ 00:09:01.672 { 00:09:01.672 "subsystem": "bdev", 00:09:01.672 "config": [ 00:09:01.672 { 00:09:01.672 "params": { 00:09:01.672 "block_size": 512, 00:09:01.672 "num_blocks": 512, 00:09:01.672 "name": "malloc0" 00:09:01.672 }, 00:09:01.672 "method": "bdev_malloc_create" 00:09:01.672 }, 00:09:01.672 { 00:09:01.672 "params": { 00:09:01.672 "block_size": 512, 00:09:01.672 "num_blocks": 512, 00:09:01.672 "name": "malloc1" 00:09:01.672 }, 00:09:01.672 "method": "bdev_malloc_create" 00:09:01.672 }, 00:09:01.672 { 00:09:01.672 "method": "bdev_wait_for_examine" 00:09:01.672 } 00:09:01.672 ] 00:09:01.672 } 00:09:01.672 ] 00:09:01.672 } 00:09:01.672 [2024-12-10 10:22:36.799416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.672 [2024-12-10 10:22:36.838208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.672 [2024-12-10 10:22:36.868274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.932 [2024-12-10 10:22:36.910428] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:01.932 [2024-12-10 10:22:36.910533] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.932 [2024-12-10 10:22:36.976208] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.932 00:09:01.932 real 0m0.437s 00:09:01.932 user 0m0.285s 00:09:01.932 sys 0m0.117s 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 ************************************ 00:09:01.932 END TEST dd_invalid_input_count 00:09:01.932 ************************************ 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 ************************************ 00:09:01.932 START TEST dd_invalid_output_count 00:09:01.932 ************************************ 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.932 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:02.192 { 00:09:02.192 "subsystems": [ 00:09:02.192 { 00:09:02.192 "subsystem": "bdev", 00:09:02.192 "config": [ 00:09:02.192 { 00:09:02.192 "params": { 00:09:02.192 "block_size": 512, 00:09:02.192 "num_blocks": 512, 00:09:02.192 "name": "malloc0" 00:09:02.192 }, 00:09:02.192 "method": "bdev_malloc_create" 00:09:02.192 }, 00:09:02.192 { 00:09:02.192 "method": "bdev_wait_for_examine" 00:09:02.192 } 00:09:02.192 ] 00:09:02.192 } 00:09:02.192 ] 00:09:02.192 } 00:09:02.192 [2024-12-10 10:22:37.163118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 
initialization... 00:09:02.192 [2024-12-10 10:22:37.163213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74915 ] 00:09:02.192 [2024-12-10 10:22:37.303221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.192 [2024-12-10 10:22:37.336957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.192 [2024-12-10 10:22:37.364658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.192 [2024-12-10 10:22:37.397307] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:02.192 [2024-12-10 10:22:37.397399] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.452 [2024-12-10 10:22:37.462286] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.452 00:09:02.452 real 0m0.435s 00:09:02.452 user 0m0.284s 00:09:02.452 sys 0m0.110s 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 ************************************ 00:09:02.452 END TEST dd_invalid_output_count 00:09:02.452 ************************************ 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 ************************************ 00:09:02.452 START TEST dd_bs_not_multiple 00:09:02.452 ************************************ 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:02.452 10:22:37 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:02.452 10:22:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:02.452 { 00:09:02.452 "subsystems": [ 00:09:02.452 { 00:09:02.452 "subsystem": "bdev", 00:09:02.452 "config": [ 00:09:02.452 { 00:09:02.452 "params": { 00:09:02.452 "block_size": 512, 00:09:02.452 "num_blocks": 512, 00:09:02.452 "name": "malloc0" 00:09:02.452 }, 00:09:02.452 "method": "bdev_malloc_create" 00:09:02.452 }, 00:09:02.452 { 00:09:02.452 "params": { 00:09:02.452 "block_size": 512, 00:09:02.452 "num_blocks": 512, 00:09:02.452 "name": "malloc1" 00:09:02.452 }, 00:09:02.452 "method": "bdev_malloc_create" 00:09:02.452 }, 00:09:02.452 { 00:09:02.452 "method": "bdev_wait_for_examine" 00:09:02.452 } 00:09:02.452 ] 00:09:02.452 } 00:09:02.452 ] 00:09:02.452 } 00:09:02.452 [2024-12-10 10:22:37.652888] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
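The dd_bs_not_multiple case set up above feeds spdk_dd an inline JSON bdev configuration (two 512-block, 512-byte malloc bdevs) over /dev/fd/62 and expects --bs=513 to be rejected because it is not a multiple of the 512-byte native block size. A minimal standalone sketch of that invocation follows; it is illustrative only and not part of the captured trace, and it writes the configuration to a temporary file instead of /dev/fd/62.

# Illustrative sketch (not part of the trace): reproduce the bs-not-multiple negative case.
# /tmp/dd_malloc.json is a stand-in for the /dev/fd/62 config the test script generates.
cat > /tmp/dd_malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Expected to exit non-zero with:
#   --bs value must be a multiple of input native block size (512)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 \
    --json /tmp/dd_malloc.json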
00:09:02.452 [2024-12-10 10:22:37.652971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74948 ] 00:09:02.711 [2024-12-10 10:22:37.786762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.711 [2024-12-10 10:22:37.826900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.711 [2024-12-10 10:22:37.859341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.711 [2024-12-10 10:22:37.904171] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:02.711 [2024-12-10 10:22:37.904241] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.970 [2024-12-10 10:22:37.959977] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.970 00:09:02.970 real 0m0.435s 00:09:02.970 user 0m0.277s 00:09:02.970 sys 0m0.122s 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:02.970 ************************************ 00:09:02.970 END TEST dd_bs_not_multiple 00:09:02.970 ************************************ 00:09:02.970 00:09:02.970 real 0m5.194s 00:09:02.970 user 0m2.833s 00:09:02.970 sys 0m1.773s 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.970 10:22:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:02.970 ************************************ 00:09:02.970 END TEST spdk_dd_negative 00:09:02.970 ************************************ 00:09:02.970 00:09:02.970 real 1m4.462s 00:09:02.970 user 0m40.765s 00:09:02.970 sys 0m27.287s 00:09:02.970 10:22:38 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.970 10:22:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:02.970 ************************************ 00:09:02.970 END TEST spdk_dd 00:09:02.970 ************************************ 00:09:02.970 10:22:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:02.970 10:22:38 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:02.970 10:22:38 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:02.970 10:22:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.970 10:22:38 -- common/autotest_common.sh@10 -- # set +x 00:09:03.229 10:22:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:03.229 10:22:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:03.229 10:22:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:03.229 10:22:38 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:09:03.229 10:22:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:03.229 10:22:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:03.229 10:22:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:03.229 10:22:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.229 10:22:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.229 10:22:38 -- common/autotest_common.sh@10 -- # set +x 00:09:03.229 ************************************ 00:09:03.229 START TEST nvmf_tcp 00:09:03.229 ************************************ 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:03.229 * Looking for test storage... 00:09:03.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.229 10:22:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.229 --rc genhtml_branch_coverage=1 00:09:03.229 --rc genhtml_function_coverage=1 00:09:03.229 --rc genhtml_legend=1 00:09:03.229 --rc geninfo_all_blocks=1 00:09:03.229 --rc geninfo_unexecuted_blocks=1 00:09:03.229 00:09:03.229 ' 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.229 --rc genhtml_branch_coverage=1 00:09:03.229 --rc genhtml_function_coverage=1 00:09:03.229 --rc genhtml_legend=1 00:09:03.229 --rc geninfo_all_blocks=1 00:09:03.229 --rc geninfo_unexecuted_blocks=1 00:09:03.229 00:09:03.229 ' 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.229 --rc genhtml_branch_coverage=1 00:09:03.229 --rc genhtml_function_coverage=1 00:09:03.229 --rc genhtml_legend=1 00:09:03.229 --rc geninfo_all_blocks=1 00:09:03.229 --rc geninfo_unexecuted_blocks=1 00:09:03.229 00:09:03.229 ' 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.229 --rc genhtml_branch_coverage=1 00:09:03.229 --rc genhtml_function_coverage=1 00:09:03.229 --rc genhtml_legend=1 00:09:03.229 --rc geninfo_all_blocks=1 00:09:03.229 --rc geninfo_unexecuted_blocks=1 00:09:03.229 00:09:03.229 ' 00:09:03.229 10:22:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:03.229 10:22:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:03.229 10:22:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.229 10:22:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.229 ************************************ 00:09:03.229 START TEST nvmf_target_core 00:09:03.229 ************************************ 00:09:03.229 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:03.489 * Looking for test storage... 00:09:03.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.489 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.489 --rc genhtml_branch_coverage=1 00:09:03.489 --rc genhtml_function_coverage=1 00:09:03.489 --rc genhtml_legend=1 00:09:03.489 --rc geninfo_all_blocks=1 00:09:03.489 --rc geninfo_unexecuted_blocks=1 00:09:03.489 00:09:03.489 ' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.490 --rc genhtml_branch_coverage=1 00:09:03.490 --rc genhtml_function_coverage=1 00:09:03.490 --rc genhtml_legend=1 00:09:03.490 --rc geninfo_all_blocks=1 00:09:03.490 --rc geninfo_unexecuted_blocks=1 00:09:03.490 00:09:03.490 ' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.490 --rc genhtml_branch_coverage=1 00:09:03.490 --rc genhtml_function_coverage=1 00:09:03.490 --rc genhtml_legend=1 00:09:03.490 --rc geninfo_all_blocks=1 00:09:03.490 --rc geninfo_unexecuted_blocks=1 00:09:03.490 00:09:03.490 ' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.490 --rc genhtml_branch_coverage=1 00:09:03.490 --rc genhtml_function_coverage=1 00:09:03.490 --rc genhtml_legend=1 00:09:03.490 --rc geninfo_all_blocks=1 00:09:03.490 --rc geninfo_unexecuted_blocks=1 00:09:03.490 00:09:03.490 ' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.490 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.490 ************************************ 00:09:03.490 START TEST nvmf_host_management 00:09:03.490 ************************************ 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.490 * Looking for test storage... 
00:09:03.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.490 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.749 --rc genhtml_branch_coverage=1 00:09:03.749 --rc genhtml_function_coverage=1 00:09:03.749 --rc genhtml_legend=1 00:09:03.749 --rc geninfo_all_blocks=1 00:09:03.749 --rc geninfo_unexecuted_blocks=1 00:09:03.749 00:09:03.749 ' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.749 --rc genhtml_branch_coverage=1 00:09:03.749 --rc genhtml_function_coverage=1 00:09:03.749 --rc genhtml_legend=1 00:09:03.749 --rc geninfo_all_blocks=1 00:09:03.749 --rc geninfo_unexecuted_blocks=1 00:09:03.749 00:09:03.749 ' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.749 --rc genhtml_branch_coverage=1 00:09:03.749 --rc genhtml_function_coverage=1 00:09:03.749 --rc genhtml_legend=1 00:09:03.749 --rc geninfo_all_blocks=1 00:09:03.749 --rc geninfo_unexecuted_blocks=1 00:09:03.749 00:09:03.749 ' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.749 --rc genhtml_branch_coverage=1 00:09:03.749 --rc genhtml_function_coverage=1 00:09:03.749 --rc genhtml_legend=1 00:09:03.749 --rc geninfo_all_blocks=1 00:09:03.749 --rc geninfo_unexecuted_blocks=1 00:09:03.749 00:09:03.749 ' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.749 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.749 10:22:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:03.749 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:03.750 Cannot find device "nvmf_init_br" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:03.750 Cannot find device "nvmf_init_br2" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:03.750 Cannot find device "nvmf_tgt_br" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.750 Cannot find device "nvmf_tgt_br2" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:03.750 Cannot find device "nvmf_init_br" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:03.750 Cannot find device "nvmf_init_br2" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:03.750 Cannot find device "nvmf_tgt_br" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:03.750 Cannot find device "nvmf_tgt_br2" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:03.750 Cannot find device "nvmf_br" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:03.750 Cannot find device "nvmf_init_if" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:03.750 Cannot find device "nvmf_init_if2" 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.750 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.023 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.023 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:04.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:09:04.294 00:09:04.294 --- 10.0.0.3 ping statistics --- 00:09:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.294 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:04.294 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:04.294 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:09:04.294 00:09:04.294 --- 10.0.0.4 ping statistics --- 00:09:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.294 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:04.294 00:09:04.294 --- 10.0.0.1 ping statistics --- 00:09:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.294 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:04.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:09:04.294 00:09:04.294 --- 10.0.0.2 ping statistics --- 00:09:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.294 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=75293 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 75293 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 75293 ']' 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
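The four pings above confirm the bridged veth topology that nvmf_veth_init assembled: initiator addresses 10.0.0.1/10.0.0.2 on the host side, target addresses 10.0.0.3/10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge with TCP port 4420 opened in iptables. A condensed sketch of one initiator/target pair is shown below; the individual commands are taken from the trace, but collapsing them into a single script (and covering only one pair) is illustrative.

# Illustrative condensation of the nvmf_veth_init steps traced above
# (single initiator/target pair; the real helper also wires up nvmf_init_if2/nvmf_tgt_if2).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # host-side initiator -> namespaced target, as verified above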
00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.294 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.294 [2024-12-10 10:22:39.374375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:04.294 [2024-12-10 10:22:39.374678] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.294 [2024-12-10 10:22:39.515904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.553 [2024-12-10 10:22:39.562683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.553 [2024-12-10 10:22:39.562953] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.553 [2024-12-10 10:22:39.563206] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.553 [2024-12-10 10:22:39.563365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.553 [2024-12-10 10:22:39.563572] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.553 [2024-12-10 10:22:39.563752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.553 [2024-12-10 10:22:39.563966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.553 [2024-12-10 10:22:39.564072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.553 [2024-12-10 10:22:39.564073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:04.553 [2024-12-10 10:22:39.599349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.553 [2024-12-10 10:22:39.700595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:04.553 10:22:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.553 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.553 Malloc0 00:09:04.553 [2024-12-10 10:22:39.762182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:04.554 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.554 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:04.554 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:04.554 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=75334 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 75334 /var/tmp/bdevperf.sock 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 75334 ']' 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:04.813 { 00:09:04.813 "params": { 00:09:04.813 "name": "Nvme$subsystem", 00:09:04.813 "trtype": "$TEST_TRANSPORT", 00:09:04.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.813 "adrfam": "ipv4", 00:09:04.813 "trsvcid": "$NVMF_PORT", 00:09:04.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.813 "hdgst": ${hdgst:-false}, 00:09:04.813 "ddgst": ${ddgst:-false} 00:09:04.813 }, 00:09:04.813 "method": "bdev_nvme_attach_controller" 00:09:04.813 } 00:09:04.813 EOF 00:09:04.813 )") 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:04.813 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:04.813 "params": { 00:09:04.813 "name": "Nvme0", 00:09:04.813 "trtype": "tcp", 00:09:04.813 "traddr": "10.0.0.3", 00:09:04.813 "adrfam": "ipv4", 00:09:04.813 "trsvcid": "4420", 00:09:04.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:04.813 "hdgst": false, 00:09:04.813 "ddgst": false 00:09:04.813 }, 00:09:04.813 "method": "bdev_nvme_attach_controller" 00:09:04.813 }' 00:09:04.813 [2024-12-10 10:22:39.878020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:04.813 [2024-12-10 10:22:39.878128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75334 ] 00:09:04.813 [2024-12-10 10:22:40.019974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.072 [2024-12-10 10:22:40.063107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.072 [2024-12-10 10:22:40.105943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.072 Running I/O for 10 seconds... 
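The config+=() / printf block above is the test assembling a one-controller JSON config on the fly and handing it to bdevperf through a process substitution (the /dev/fd/63 in the command line). A hedged reconstruction follows: the inner "params"/"method" object is taken from the printf output above, while the outer "subsystems"/"bdev"/"config" wrapper is assumed from SPDK's usual JSON config layout, and the file path is illustrative.

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same flags as the traced run: queue depth 64, 64 KiB I/Os, verify workload, 10 s.
build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10

Once bdevperf is up, the waitforio loop below polls bdev_get_iostat over /var/tmp/bdevperf.sock and proceeds only after at least 100 reads have completed.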
00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:05.072 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:05.073 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.073 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:05.073 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.073 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:05.332 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:05.332 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.592 10:22:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.592 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.592 [2024-12-10 10:22:40.650154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.592 [2024-12-10 10:22:40.650430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 
10:22:40.650588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650809] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.650987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.650998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.593 [2024-12-10 10:22:40.651318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.593 [2024-12-10 10:22:40.651330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.594 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:05.594 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.594 [2024-12-10 10:22:40.651781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:05.594 [2024-12-10 10:22:40.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.594 [2024-12-10 10:22:40.651857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae460 is same with the state(6) to be set 00:09:05.594 [2024-12-10 10:22:40.651908] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ae460 was disconnected and freed. reset controller. 
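The long run of "ABORTED - SQ DELETION" completions above is the intended effect of the fault the test injects while bdevperf is running: removing the host NQN from the subsystem makes the target drop the queue pair, every in-flight read is aborted, and the initiator resets the controller; the host is then re-added so the reset can succeed (the qpair 0x21ae460 "disconnected and freed" line marks the reset kicking in). The two RPCs are visible interleaved in the trace; a standalone equivalent, using scripts/rpc.py instead of the test's rpc_cmd wrapper, would look roughly like:

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# in-flight I/O on the initiator is aborted (SQ DELETION) and the controller resets
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0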
00:09:05.594 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.594 [2024-12-10 10:22:40.653132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:05.594 task offset: 89984 on job bdev=Nvme0n1 fails 00:09:05.594 00:09:05.594 Latency(us) 00:09:05.594 [2024-12-10T10:22:40.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.594 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:05.594 Job: Nvme0n1 ended in about 0.45 seconds with error 00:09:05.594 Verification LBA range: start 0x0 length 0x400 00:09:05.594 Nvme0n1 : 0.45 1432.28 89.52 143.23 0.00 39295.01 6285.50 39559.91 00:09:05.594 [2024-12-10T10:22:40.821Z] =================================================================================================================== 00:09:05.594 [2024-12-10T10:22:40.821Z] Total : 1432.28 89.52 143.23 0.00 39295.01 6285.50 39559.91 00:09:05.594 [2024-12-10 10:22:40.655216] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.594 [2024-12-10 10:22:40.655242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21447a0 (9): Bad file descriptor 00:09:05.594 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.594 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:05.594 [2024-12-10 10:22:40.664631] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 75334 00:09:06.530 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (75334) - No such process 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:06.530 { 00:09:06.530 "params": { 00:09:06.530 "name": "Nvme$subsystem", 00:09:06.530 "trtype": "$TEST_TRANSPORT", 00:09:06.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.530 "adrfam": "ipv4", 00:09:06.530 "trsvcid": "$NVMF_PORT", 00:09:06.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.530 "hdgst": ${hdgst:-false}, 00:09:06.530 "ddgst": ${ddgst:-false} 00:09:06.530 }, 00:09:06.530 "method": "bdev_nvme_attach_controller" 00:09:06.530 } 
00:09:06.530 EOF 00:09:06.530 )") 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:06.530 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:06.530 "params": { 00:09:06.530 "name": "Nvme0", 00:09:06.530 "trtype": "tcp", 00:09:06.530 "traddr": "10.0.0.3", 00:09:06.530 "adrfam": "ipv4", 00:09:06.530 "trsvcid": "4420", 00:09:06.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:06.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:06.530 "hdgst": false, 00:09:06.530 "ddgst": false 00:09:06.530 }, 00:09:06.530 "method": "bdev_nvme_attach_controller" 00:09:06.530 }' 00:09:06.530 [2024-12-10 10:22:41.729200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:06.530 [2024-12-10 10:22:41.729476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75374 ] 00:09:06.789 [2024-12-10 10:22:41.869269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.789 [2024-12-10 10:22:41.904660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.789 [2024-12-10 10:22:41.942993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.049 Running I/O for 1 seconds... 00:09:07.986 1600.00 IOPS, 100.00 MiB/s 00:09:07.986 Latency(us) 00:09:07.986 [2024-12-10T10:22:43.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.986 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:07.986 Verification LBA range: start 0x0 length 0x400 00:09:07.986 Nvme0n1 : 1.03 1617.08 101.07 0.00 0.00 38829.61 3902.37 35270.28 00:09:07.986 [2024-12-10T10:22:43.213Z] =================================================================================================================== 00:09:07.986 [2024-12-10T10:22:43.213Z] Total : 1617.08 101.07 0.00 0.00 38829.61 3902.37 35270.28 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:08.245 10:22:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.245 rmmod nvme_tcp 00:09:08.245 rmmod nvme_fabrics 00:09:08.245 rmmod nvme_keyring 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 75293 ']' 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 75293 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 75293 ']' 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 75293 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75293 00:09:08.245 killing process with pid 75293 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75293' 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 75293 00:09:08.245 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 75293 00:09:08.504 [2024-12-10 10:22:43.522529] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:08.504 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:08.764 00:09:08.764 real 0m5.197s 00:09:08.764 user 0m18.139s 00:09:08.764 sys 0m1.406s 00:09:08.764 ************************************ 00:09:08.764 END TEST nvmf_host_management 00:09:08.764 ************************************ 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.764 ************************************ 00:09:08.764 START TEST nvmf_lvol 00:09:08.764 ************************************ 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:08.764 * Looking for test 
storage... 00:09:08.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:09:08.764 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.023 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.024 --rc genhtml_branch_coverage=1 00:09:09.024 --rc genhtml_function_coverage=1 00:09:09.024 --rc genhtml_legend=1 00:09:09.024 --rc geninfo_all_blocks=1 00:09:09.024 --rc geninfo_unexecuted_blocks=1 00:09:09.024 00:09:09.024 ' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.024 --rc genhtml_branch_coverage=1 00:09:09.024 --rc genhtml_function_coverage=1 00:09:09.024 --rc genhtml_legend=1 00:09:09.024 --rc geninfo_all_blocks=1 00:09:09.024 --rc geninfo_unexecuted_blocks=1 00:09:09.024 00:09:09.024 ' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.024 --rc genhtml_branch_coverage=1 00:09:09.024 --rc genhtml_function_coverage=1 00:09:09.024 --rc genhtml_legend=1 00:09:09.024 --rc geninfo_all_blocks=1 00:09:09.024 --rc geninfo_unexecuted_blocks=1 00:09:09.024 00:09:09.024 ' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.024 --rc genhtml_branch_coverage=1 00:09:09.024 --rc genhtml_function_coverage=1 00:09:09.024 --rc genhtml_legend=1 00:09:09.024 --rc geninfo_all_blocks=1 00:09:09.024 --rc geninfo_unexecuted_blocks=1 00:09:09.024 00:09:09.024 ' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.024 10:22:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.024 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:09.024 
10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:09.024 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
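The variables above describe the test's virtual network: initiator-side veth interfaces on the host, target-side veth interfaces moved into the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge joining them. A minimal sketch of the topology that nvmf_veth_init builds in the lines that follow, assuming the same names and 10.0.0.0/24 addressing and showing only one interface pair per side (bring-up of the links, the second pair, and the iptables rules are omitted); illustrative only, not the test script itself:

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator end + its bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target end + its bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                    # bridge stitches the host-side ends together
    ip link set nvmf_tgt_br master nvmf_br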
00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.025 Cannot find device "nvmf_init_br" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.025 Cannot find device "nvmf_init_br2" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.025 Cannot find device "nvmf_tgt_br" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.025 Cannot find device "nvmf_tgt_br2" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.025 Cannot find device "nvmf_init_br" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.025 Cannot find device "nvmf_init_br2" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.025 Cannot find device "nvmf_tgt_br" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.025 Cannot find device "nvmf_tgt_br2" 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:09.025 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.025 Cannot find device "nvmf_br" 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.285 Cannot find device "nvmf_init_if" 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.285 Cannot find device "nvmf_init_if2" 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.285 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:09:09.545 00:09:09.545 --- 10.0.0.3 ping statistics --- 00:09:09.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.545 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.545 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.545 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:09:09.545 00:09:09.545 --- 10.0.0.4 ping statistics --- 00:09:09.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.545 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:09.545 00:09:09.545 --- 10.0.0.1 ping statistics --- 00:09:09.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.545 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:09:09.545 00:09:09.545 --- 10.0.0.2 ping statistics --- 00:09:09.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.545 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=75641 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 75641 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 75641 ']' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.545 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.545 [2024-12-10 10:22:44.667654] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:09.545 [2024-12-10 10:22:44.667803] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.805 [2024-12-10 10:22:44.805676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.805 [2024-12-10 10:22:44.849769] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.805 [2024-12-10 10:22:44.849834] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.805 [2024-12-10 10:22:44.849849] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.805 [2024-12-10 10:22:44.849859] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.805 [2024-12-10 10:22:44.849869] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.805 [2024-12-10 10:22:44.850008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.805 [2024-12-10 10:22:44.850703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.805 [2024-12-10 10:22:44.850715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.805 [2024-12-10 10:22:44.888199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.805 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:10.065 [2024-12-10 10:22:45.204532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.065 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.632 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:10.632 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.632 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:10.632 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:11.199 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:11.459 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e19e589f-3c1f-4aaa-8122-c5a2c7dc23d4 00:09:11.459 10:22:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e19e589f-3c1f-4aaa-8122-c5a2c7dc23d4 lvol 20 00:09:11.718 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=199f5654-7c6c-41c4-85fd-4048a70bc083 00:09:11.718 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.977 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 199f5654-7c6c-41c4-85fd-4048a70bc083 00:09:11.977 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:12.236 [2024-12-10 10:22:47.409750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.236 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:12.495 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:12.495 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=75709 00:09:12.495 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:13.873 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 199f5654-7c6c-41c4-85fd-4048a70bc083 MY_SNAPSHOT 00:09:13.873 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e389d8cc-b6e5-4745-9f8e-abad4fd74114 00:09:13.873 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 199f5654-7c6c-41c4-85fd-4048a70bc083 30 00:09:14.132 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e389d8cc-b6e5-4745-9f8e-abad4fd74114 MY_CLONE 00:09:14.391 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0ff66b1c-3ddb-4dee-b947-bb46003e713e 00:09:14.391 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0ff66b1c-3ddb-4dee-b947-bb46003e713e 00:09:14.958 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 75709 00:09:23.075 Initializing NVMe Controllers 00:09:23.075 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:23.075 Controller IO queue size 128, less than required. 00:09:23.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:23.075 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:23.075 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:23.076 Initialization complete. Launching workers. 
00:09:23.076 ======================================================== 00:09:23.076 Latency(us) 00:09:23.076 Device Information : IOPS MiB/s Average min max 00:09:23.076 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10387.00 40.57 12326.37 1652.62 55067.86 00:09:23.076 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10441.40 40.79 12262.79 3269.77 103748.71 00:09:23.076 ======================================================== 00:09:23.076 Total : 20828.40 81.36 12294.50 1652.62 103748.71 00:09:23.076 00:09:23.076 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.076 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 199f5654-7c6c-41c4-85fd-4048a70bc083 00:09:23.334 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e19e589f-3c1f-4aaa-8122-c5a2c7dc23d4 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:23.593 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.594 rmmod nvme_tcp 00:09:23.594 rmmod nvme_fabrics 00:09:23.594 rmmod nvme_keyring 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 75641 ']' 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 75641 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 75641 ']' 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 75641 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.594 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75641 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.853 killing process with pid 75641 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 75641' 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 75641 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 75641 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:23.853 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:23.853 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.853 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:23.853 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:23.853 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:23.853 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:23.853 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:24.112 00:09:24.112 real 0m15.379s 00:09:24.112 user 1m3.412s 00:09:24.112 sys 0m4.078s 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:24.112 ************************************ 00:09:24.112 END TEST nvmf_lvol 00:09:24.112 ************************************ 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.112 ************************************ 00:09:24.112 START TEST nvmf_lvs_grow 00:09:24.112 ************************************ 00:09:24.112 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:24.372 * Looking for test storage... 00:09:24.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.372 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.373 --rc genhtml_branch_coverage=1 00:09:24.373 --rc genhtml_function_coverage=1 00:09:24.373 --rc genhtml_legend=1 00:09:24.373 --rc geninfo_all_blocks=1 00:09:24.373 --rc geninfo_unexecuted_blocks=1 00:09:24.373 00:09:24.373 ' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.373 --rc genhtml_branch_coverage=1 00:09:24.373 --rc genhtml_function_coverage=1 00:09:24.373 --rc genhtml_legend=1 00:09:24.373 --rc geninfo_all_blocks=1 00:09:24.373 --rc geninfo_unexecuted_blocks=1 00:09:24.373 00:09:24.373 ' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.373 --rc genhtml_branch_coverage=1 00:09:24.373 --rc genhtml_function_coverage=1 00:09:24.373 --rc genhtml_legend=1 00:09:24.373 --rc geninfo_all_blocks=1 00:09:24.373 --rc geninfo_unexecuted_blocks=1 00:09:24.373 00:09:24.373 ' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.373 --rc genhtml_branch_coverage=1 00:09:24.373 --rc genhtml_function_coverage=1 00:09:24.373 --rc genhtml_legend=1 00:09:24.373 --rc geninfo_all_blocks=1 00:09:24.373 --rc geninfo_unexecuted_blocks=1 00:09:24.373 00:09:24.373 ' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:24.373 10:22:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
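The lvs_grow test drives two SPDK applications over separate JSON-RPC sockets: rpc_py talks to the nvmf target on the default /var/tmp/spdk.sock, while the bdevperf_rpc_sock configured here is presumably used later to address a bdevperf instance. A minimal illustration of selecting each socket with rpc.py's -s option (bdev_get_bdevs is just a harmless example call, not one taken from this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs                            # default socket /var/tmp/spdk.sock (nvmf target)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs  # same call, aimed at the bdevperf socket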
00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.373 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
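NVMF_TARGET_NS_CMD above is what later places the target inside the namespace: it is prepended to NVMF_APP before launch, as seen in the earlier nvmf_lvol run where the target was started via ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt. A hedged bash sketch of that array-prefix pattern, using the binary path and flags visible in the trace above:

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prefix the launcher: the target now starts inside the namespace
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x7 &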
00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:24.374 Cannot find device "nvmf_init_br" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:24.374 Cannot find device "nvmf_init_br2" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:24.374 Cannot find device "nvmf_tgt_br" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.374 Cannot find device "nvmf_tgt_br2" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:24.374 Cannot find device "nvmf_init_br" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:24.374 Cannot find device "nvmf_init_br2" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:24.374 Cannot find device "nvmf_tgt_br" 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:24.374 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:24.633 Cannot find device "nvmf_tgt_br2" 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:24.633 Cannot find device "nvmf_br" 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:24.633 Cannot find device "nvmf_init_if" 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:24.633 Cannot find device "nvmf_init_if2" 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
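The trace above is nvmf_veth_init building the isolated test topology: a network namespace for the target, veth pairs for each side, a bridge tying the host-side ends together, and 10.0.0.0/24 addressing. A minimal standalone sketch of the same layout, reduced to the first initiator/target pair (interface names and addresses are taken from the trace; assumes root on a disposable host):

  # namespace that will hold the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: *_if carries an IP address, *_br is the end enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  # move the target endpoint into the namespace and address both sides
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bring everything up, including loopback inside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side ends so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The second pair (nvmf_init_if2 / nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) follows the same pattern.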
00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.633 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:24.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:24.893 00:09:24.893 --- 10.0.0.3 ping statistics --- 00:09:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.893 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:24.893 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:24.893 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:24.893 00:09:24.893 --- 10.0.0.4 ping statistics --- 00:09:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.893 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:24.893 00:09:24.893 --- 10.0.0.1 ping statistics --- 00:09:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.893 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:24.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:24.893 00:09:24.893 --- 10.0.0.2 ping statistics --- 00:09:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.893 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=76082 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 76082 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 76082 ']' 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.893 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.893 [2024-12-10 10:22:59.963938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
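Before the target application starts, TCP port 4420 is explicitly allowed on the initiator-side interfaces, bridge-internal forwarding is permitted, reachability is checked in both directions, and the kernel NVMe/TCP initiator module is loaded; only then is nvmf_tgt launched inside the namespace. Condensed, the same steps look like this (paths as printed in the trace; the comment match added by the ipts helper is omitted, and the polling loop is a simplified stand-in for waitforlisten):

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                   # host -> namespaced target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
  modprobe nvme-tcp                                    # kernel NVMe/TCP initiator
  # start the SPDK target inside the namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done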
00:09:24.893 [2024-12-10 10:22:59.964059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.893 [2024-12-10 10:23:00.100598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.152 [2024-12-10 10:23:00.136058] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.152 [2024-12-10 10:23:00.136131] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.152 [2024-12-10 10:23:00.136158] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.152 [2024-12-10 10:23:00.136165] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.152 [2024-12-10 10:23:00.136171] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.152 [2024-12-10 10:23:00.136197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.152 [2024-12-10 10:23:00.165394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.089 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:26.089 [2024-12-10 10:23:01.209258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.089 ************************************ 00:09:26.089 START TEST lvs_grow_clean 00:09:26.089 ************************************ 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:26.089 10:23:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:26.089 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.657 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.657 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.915 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=981362a3-5acc-4729-9f44-6679124cc6b5 00:09:26.915 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.915 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:27.174 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:27.174 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:27.174 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 981362a3-5acc-4729-9f44-6679124cc6b5 lvol 150 00:09:27.432 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7954d5d1-a792-438d-b488-89bff42712f4 00:09:27.432 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:27.432 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.690 [2024-12-10 10:23:02.658281] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:27.690 [2024-12-10 10:23:02.658392] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.690 true 00:09:27.690 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:27.690 10:23:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.948 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.948 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.948 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7954d5d1-a792-438d-b488-89bff42712f4 00:09:28.207 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:28.465 [2024-12-10 10:23:03.678900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:28.755 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76170 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76170 /var/tmp/bdevperf.sock 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 76170 ']' 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.014 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:29.014 [2024-12-10 10:23:04.057641] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
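Everything the clean test provisions before bdevperf starts is ordinary rpc.py traffic against the target's socket. In sketch form, with $rpc standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the UUIDs captured into shell variables instead of the literal values above:

  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 x 4 MiB clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                           # 150 MiB thick lvol
  # export the lvol over NVMe/TCP on the namespaced target address
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

bdevperf then attaches to that subsystem from the host side (bdev_nvme_attach_controller against 10.0.0.3:4420) and drives random writes against the resulting Nvme0n1 bdev while the lvstore is grown underneath it.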
00:09:29.014 [2024-12-10 10:23:04.057759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76170 ] 00:09:29.014 [2024-12-10 10:23:04.191643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.014 [2024-12-10 10:23:04.228451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.272 [2024-12-10 10:23:04.262295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.272 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.272 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:29.272 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.530 Nvme0n1 00:09:29.530 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.788 [ 00:09:29.788 { 00:09:29.788 "name": "Nvme0n1", 00:09:29.788 "aliases": [ 00:09:29.788 "7954d5d1-a792-438d-b488-89bff42712f4" 00:09:29.788 ], 00:09:29.788 "product_name": "NVMe disk", 00:09:29.788 "block_size": 4096, 00:09:29.788 "num_blocks": 38912, 00:09:29.788 "uuid": "7954d5d1-a792-438d-b488-89bff42712f4", 00:09:29.788 "numa_id": -1, 00:09:29.788 "assigned_rate_limits": { 00:09:29.788 "rw_ios_per_sec": 0, 00:09:29.788 "rw_mbytes_per_sec": 0, 00:09:29.788 "r_mbytes_per_sec": 0, 00:09:29.788 "w_mbytes_per_sec": 0 00:09:29.788 }, 00:09:29.788 "claimed": false, 00:09:29.788 "zoned": false, 00:09:29.788 "supported_io_types": { 00:09:29.788 "read": true, 00:09:29.788 "write": true, 00:09:29.788 "unmap": true, 00:09:29.788 "flush": true, 00:09:29.788 "reset": true, 00:09:29.788 "nvme_admin": true, 00:09:29.788 "nvme_io": true, 00:09:29.788 "nvme_io_md": false, 00:09:29.788 "write_zeroes": true, 00:09:29.788 "zcopy": false, 00:09:29.788 "get_zone_info": false, 00:09:29.788 "zone_management": false, 00:09:29.788 "zone_append": false, 00:09:29.788 "compare": true, 00:09:29.788 "compare_and_write": true, 00:09:29.788 "abort": true, 00:09:29.788 "seek_hole": false, 00:09:29.788 "seek_data": false, 00:09:29.788 "copy": true, 00:09:29.788 "nvme_iov_md": false 00:09:29.788 }, 00:09:29.788 "memory_domains": [ 00:09:29.788 { 00:09:29.788 "dma_device_id": "system", 00:09:29.788 "dma_device_type": 1 00:09:29.788 } 00:09:29.788 ], 00:09:29.788 "driver_specific": { 00:09:29.788 "nvme": [ 00:09:29.788 { 00:09:29.788 "trid": { 00:09:29.788 "trtype": "TCP", 00:09:29.788 "adrfam": "IPv4", 00:09:29.788 "traddr": "10.0.0.3", 00:09:29.788 "trsvcid": "4420", 00:09:29.788 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.788 }, 00:09:29.788 "ctrlr_data": { 00:09:29.788 "cntlid": 1, 00:09:29.788 "vendor_id": "0x8086", 00:09:29.788 "model_number": "SPDK bdev Controller", 00:09:29.788 "serial_number": "SPDK0", 00:09:29.788 "firmware_revision": "24.09.1", 00:09:29.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.788 "oacs": { 00:09:29.788 "security": 0, 00:09:29.788 "format": 0, 00:09:29.788 "firmware": 0, 
00:09:29.788 "ns_manage": 0 00:09:29.788 }, 00:09:29.788 "multi_ctrlr": true, 00:09:29.788 "ana_reporting": false 00:09:29.788 }, 00:09:29.788 "vs": { 00:09:29.788 "nvme_version": "1.3" 00:09:29.788 }, 00:09:29.788 "ns_data": { 00:09:29.788 "id": 1, 00:09:29.788 "can_share": true 00:09:29.788 } 00:09:29.788 } 00:09:29.788 ], 00:09:29.788 "mp_policy": "active_passive" 00:09:29.788 } 00:09:29.788 } 00:09:29.788 ] 00:09:29.789 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76186 00:09:29.789 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.789 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.789 Running I/O for 10 seconds... 00:09:31.171 Latency(us) 00:09:31.171 [2024-12-10T10:23:06.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.171 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:31.171 [2024-12-10T10:23:06.398Z] =================================================================================================================== 00:09:31.171 [2024-12-10T10:23:06.398Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:31.171 00:09:31.738 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:31.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.997 Nvme0n1 : 2.00 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:09:31.997 [2024-12-10T10:23:07.224Z] =================================================================================================================== 00:09:31.997 [2024-12-10T10:23:07.224Z] Total : 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:09:31.997 00:09:32.256 true 00:09:32.256 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:32.256 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:32.515 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.515 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.515 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 76186 00:09:33.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.081 Nvme0n1 : 3.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:33.081 [2024-12-10T10:23:08.308Z] =================================================================================================================== 00:09:33.081 [2024-12-10T10:23:08.308Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:33.081 00:09:34.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.016 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:34.016 [2024-12-10T10:23:09.243Z] 
=================================================================================================================== 00:09:34.016 [2024-12-10T10:23:09.243Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:34.016 00:09:34.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.952 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:34.952 [2024-12-10T10:23:10.179Z] =================================================================================================================== 00:09:34.952 [2024-12-10T10:23:10.179Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:34.952 00:09:35.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.887 Nvme0n1 : 6.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:35.887 [2024-12-10T10:23:11.115Z] =================================================================================================================== 00:09:35.888 [2024-12-10T10:23:11.115Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:35.888 00:09:36.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.822 Nvme0n1 : 7.00 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:09:36.822 [2024-12-10T10:23:12.049Z] =================================================================================================================== 00:09:36.822 [2024-12-10T10:23:12.049Z] Total : 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:09:36.822 00:09:38.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.197 Nvme0n1 : 8.00 6429.38 25.11 0.00 0.00 0.00 0.00 0.00 00:09:38.197 [2024-12-10T10:23:13.424Z] =================================================================================================================== 00:09:38.197 [2024-12-10T10:23:13.424Z] Total : 6429.38 25.11 0.00 0.00 0.00 0.00 0.00 00:09:38.197 00:09:39.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.134 Nvme0n1 : 9.00 6420.56 25.08 0.00 0.00 0.00 0.00 0.00 00:09:39.134 [2024-12-10T10:23:14.361Z] =================================================================================================================== 00:09:39.134 [2024-12-10T10:23:14.361Z] Total : 6420.56 25.08 0.00 0.00 0.00 0.00 0.00 00:09:39.134 00:09:40.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.094 Nvme0n1 : 10.00 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:09:40.094 [2024-12-10T10:23:15.321Z] =================================================================================================================== 00:09:40.094 [2024-12-10T10:23:15.321Z] Total : 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:09:40.094 00:09:40.094 00:09:40.094 Latency(us) 00:09:40.094 [2024-12-10T10:23:15.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.094 Nvme0n1 : 10.01 6409.95 25.04 0.00 0.00 19963.11 17039.36 42181.35 00:09:40.094 [2024-12-10T10:23:15.321Z] =================================================================================================================== 00:09:40.094 [2024-12-10T10:23:15.321Z] Total : 6409.95 25.04 0.00 0.00 19963.11 17039.36 42181.35 00:09:40.094 { 00:09:40.094 "results": [ 00:09:40.094 { 00:09:40.094 "job": "Nvme0n1", 00:09:40.094 "core_mask": "0x2", 00:09:40.094 "workload": "randwrite", 00:09:40.094 "status": "finished", 00:09:40.094 "queue_depth": 128, 00:09:40.094 "io_size": 4096, 00:09:40.094 "runtime": 
10.005689, 00:09:40.094 "iops": 6409.953377523527, 00:09:40.094 "mibps": 25.038880380951277, 00:09:40.094 "io_failed": 0, 00:09:40.094 "io_timeout": 0, 00:09:40.094 "avg_latency_us": 19963.11264018506, 00:09:40.094 "min_latency_us": 17039.36, 00:09:40.094 "max_latency_us": 42181.35272727273 00:09:40.094 } 00:09:40.094 ], 00:09:40.094 "core_count": 1 00:09:40.094 } 00:09:40.094 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76170 00:09:40.094 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 76170 ']' 00:09:40.094 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 76170 00:09:40.094 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:40.094 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76170 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:40.095 killing process with pid 76170 00:09:40.095 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.095 00:09:40.095 Latency(us) 00:09:40.095 [2024-12-10T10:23:15.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.095 [2024-12-10T10:23:15.322Z] =================================================================================================================== 00:09:40.095 [2024-12-10T10:23:15.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76170' 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 76170 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 76170 00:09:40.095 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:40.353 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.611 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:40.611 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.871 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:40.871 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:40.871 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.130 [2024-12-10 10:23:16.278977] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:41.130 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:41.389 request: 00:09:41.389 { 00:09:41.389 "uuid": "981362a3-5acc-4729-9f44-6679124cc6b5", 00:09:41.389 "method": "bdev_lvol_get_lvstores", 00:09:41.389 "req_id": 1 00:09:41.389 } 00:09:41.389 Got JSON-RPC error response 00:09:41.389 response: 00:09:41.389 { 00:09:41.389 "code": -19, 00:09:41.389 "message": "No such device" 00:09:41.389 } 00:09:41.389 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:41.389 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.389 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:41.389 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:41.389 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.648 aio_bdev 00:09:41.648 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7954d5d1-a792-438d-b488-89bff42712f4 00:09:41.649 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=7954d5d1-a792-438d-b488-89bff42712f4 00:09:41.649 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.649 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:41.649 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.649 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.649 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.908 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7954d5d1-a792-438d-b488-89bff42712f4 -t 2000 00:09:42.166 [ 00:09:42.166 { 00:09:42.166 "name": "7954d5d1-a792-438d-b488-89bff42712f4", 00:09:42.166 "aliases": [ 00:09:42.166 "lvs/lvol" 00:09:42.166 ], 00:09:42.166 "product_name": "Logical Volume", 00:09:42.166 "block_size": 4096, 00:09:42.166 "num_blocks": 38912, 00:09:42.166 "uuid": "7954d5d1-a792-438d-b488-89bff42712f4", 00:09:42.166 "assigned_rate_limits": { 00:09:42.166 "rw_ios_per_sec": 0, 00:09:42.166 "rw_mbytes_per_sec": 0, 00:09:42.166 "r_mbytes_per_sec": 0, 00:09:42.166 "w_mbytes_per_sec": 0 00:09:42.166 }, 00:09:42.166 "claimed": false, 00:09:42.166 "zoned": false, 00:09:42.166 "supported_io_types": { 00:09:42.166 "read": true, 00:09:42.166 "write": true, 00:09:42.166 "unmap": true, 00:09:42.166 "flush": false, 00:09:42.166 "reset": true, 00:09:42.166 "nvme_admin": false, 00:09:42.166 "nvme_io": false, 00:09:42.166 "nvme_io_md": false, 00:09:42.166 "write_zeroes": true, 00:09:42.166 "zcopy": false, 00:09:42.166 "get_zone_info": false, 00:09:42.166 "zone_management": false, 00:09:42.166 "zone_append": false, 00:09:42.166 "compare": false, 00:09:42.166 "compare_and_write": false, 00:09:42.166 "abort": false, 00:09:42.166 "seek_hole": true, 00:09:42.166 "seek_data": true, 00:09:42.166 "copy": false, 00:09:42.166 "nvme_iov_md": false 00:09:42.166 }, 00:09:42.166 "driver_specific": { 00:09:42.166 "lvol": { 00:09:42.166 "lvol_store_uuid": "981362a3-5acc-4729-9f44-6679124cc6b5", 00:09:42.166 "base_bdev": "aio_bdev", 00:09:42.166 "thin_provision": false, 00:09:42.166 "num_allocated_clusters": 38, 00:09:42.166 "snapshot": false, 00:09:42.166 "clone": false, 00:09:42.166 "esnap_clone": false 00:09:42.166 } 00:09:42.166 } 00:09:42.166 } 00:09:42.166 ] 00:09:42.166 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:42.166 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:42.166 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:42.425 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:42.425 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:42.425 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:42.684 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:42.684 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7954d5d1-a792-438d-b488-89bff42712f4 00:09:42.943 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 981362a3-5acc-4729-9f44-6679124cc6b5 00:09:43.201 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.459 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.718 ************************************ 00:09:43.718 END TEST lvs_grow_clean 00:09:43.718 ************************************ 00:09:43.718 00:09:43.718 real 0m17.615s 00:09:43.718 user 0m16.633s 00:09:43.718 sys 0m2.307s 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.718 ************************************ 00:09:43.718 START TEST lvs_grow_dirty 00:09:43.718 ************************************ 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.718 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.285 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:44.286 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:44.286 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:44.286 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:44.286 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:44.544 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:44.544 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:44.544 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0a3acf2e-7232-48a4-a969-5da52302e0ea lvol 150 00:09:44.803 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:09:44.803 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.803 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:45.062 [2024-12-10 10:23:20.144169] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:45.062 [2024-12-10 10:23:20.144279] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:45.062 true 00:09:45.062 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:45.062 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:45.321 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:45.321 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.580 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:09:45.838 10:23:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:46.097 [2024-12-10 10:23:21.172660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:46.097 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:46.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76432 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76432 /var/tmp/bdevperf.sock 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 76432 ']' 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.356 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.356 [2024-12-10 10:23:21.529497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:46.356 [2024-12-10 10:23:21.529612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76432 ] 00:09:46.615 [2024-12-10 10:23:21.661959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.615 [2024-12-10 10:23:21.698330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.615 [2024-12-10 10:23:21.727617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.615 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.615 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:46.615 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:47.183 Nvme0n1 00:09:47.183 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:47.183 [ 00:09:47.183 { 00:09:47.183 "name": "Nvme0n1", 00:09:47.183 "aliases": [ 00:09:47.183 "e96a0685-21d7-49fc-bdf1-7eb2926e44e2" 00:09:47.183 ], 00:09:47.183 "product_name": "NVMe disk", 00:09:47.183 "block_size": 4096, 00:09:47.183 "num_blocks": 38912, 00:09:47.183 "uuid": "e96a0685-21d7-49fc-bdf1-7eb2926e44e2", 00:09:47.183 "numa_id": -1, 00:09:47.183 "assigned_rate_limits": { 00:09:47.183 "rw_ios_per_sec": 0, 00:09:47.183 "rw_mbytes_per_sec": 0, 00:09:47.183 "r_mbytes_per_sec": 0, 00:09:47.184 "w_mbytes_per_sec": 0 00:09:47.184 }, 00:09:47.184 "claimed": false, 00:09:47.184 "zoned": false, 00:09:47.184 "supported_io_types": { 00:09:47.184 "read": true, 00:09:47.184 "write": true, 00:09:47.184 "unmap": true, 00:09:47.184 "flush": true, 00:09:47.184 "reset": true, 00:09:47.184 "nvme_admin": true, 00:09:47.184 "nvme_io": true, 00:09:47.184 "nvme_io_md": false, 00:09:47.184 "write_zeroes": true, 00:09:47.184 "zcopy": false, 00:09:47.184 "get_zone_info": false, 00:09:47.184 "zone_management": false, 00:09:47.184 "zone_append": false, 00:09:47.184 "compare": true, 00:09:47.184 "compare_and_write": true, 00:09:47.184 "abort": true, 00:09:47.184 "seek_hole": false, 00:09:47.184 "seek_data": false, 00:09:47.184 "copy": true, 00:09:47.184 "nvme_iov_md": false 00:09:47.184 }, 00:09:47.184 "memory_domains": [ 00:09:47.184 { 00:09:47.184 "dma_device_id": "system", 00:09:47.184 "dma_device_type": 1 00:09:47.184 } 00:09:47.184 ], 00:09:47.184 "driver_specific": { 00:09:47.184 "nvme": [ 00:09:47.184 { 00:09:47.184 "trid": { 00:09:47.184 "trtype": "TCP", 00:09:47.184 "adrfam": "IPv4", 00:09:47.184 "traddr": "10.0.0.3", 00:09:47.184 "trsvcid": "4420", 00:09:47.184 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:47.184 }, 00:09:47.184 "ctrlr_data": { 00:09:47.184 "cntlid": 1, 00:09:47.184 "vendor_id": "0x8086", 00:09:47.184 "model_number": "SPDK bdev Controller", 00:09:47.184 "serial_number": "SPDK0", 00:09:47.184 "firmware_revision": "24.09.1", 00:09:47.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.184 "oacs": { 00:09:47.184 "security": 0, 00:09:47.184 "format": 0, 00:09:47.184 "firmware": 0, 
00:09:47.184 "ns_manage": 0 00:09:47.184 }, 00:09:47.184 "multi_ctrlr": true, 00:09:47.184 "ana_reporting": false 00:09:47.184 }, 00:09:47.184 "vs": { 00:09:47.184 "nvme_version": "1.3" 00:09:47.184 }, 00:09:47.184 "ns_data": { 00:09:47.184 "id": 1, 00:09:47.184 "can_share": true 00:09:47.184 } 00:09:47.184 } 00:09:47.184 ], 00:09:47.184 "mp_policy": "active_passive" 00:09:47.184 } 00:09:47.184 } 00:09:47.184 ] 00:09:47.184 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76443 00:09:47.184 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.184 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:47.442 Running I/O for 10 seconds... 00:09:48.378 Latency(us) 00:09:48.378 [2024-12-10T10:23:23.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.378 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:48.378 [2024-12-10T10:23:23.605Z] =================================================================================================================== 00:09:48.378 [2024-12-10T10:23:23.605Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:48.378 00:09:49.316 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:49.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.316 Nvme0n1 : 2.00 6345.50 24.79 0.00 0.00 0.00 0.00 0.00 00:09:49.316 [2024-12-10T10:23:24.543Z] =================================================================================================================== 00:09:49.316 [2024-12-10T10:23:24.543Z] Total : 6345.50 24.79 0.00 0.00 0.00 0.00 0.00 00:09:49.316 00:09:49.577 true 00:09:49.577 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:49.577 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:49.835 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:49.835 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:49.835 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 76443 00:09:50.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.407 Nvme0n1 : 3.00 6389.33 24.96 0.00 0.00 0.00 0.00 0.00 00:09:50.407 [2024-12-10T10:23:25.634Z] =================================================================================================================== 00:09:50.407 [2024-12-10T10:23:25.634Z] Total : 6389.33 24.96 0.00 0.00 0.00 0.00 0.00 00:09:50.407 00:09:51.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.342 Nvme0n1 : 4.00 6443.00 25.17 0.00 0.00 0.00 0.00 0.00 00:09:51.342 [2024-12-10T10:23:26.569Z] 
=================================================================================================================== 00:09:51.342 [2024-12-10T10:23:26.569Z] Total : 6443.00 25.17 0.00 0.00 0.00 0.00 0.00 00:09:51.342 00:09:52.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.718 Nvme0n1 : 5.00 6475.20 25.29 0.00 0.00 0.00 0.00 0.00 00:09:52.718 [2024-12-10T10:23:27.945Z] =================================================================================================================== 00:09:52.718 [2024-12-10T10:23:27.945Z] Total : 6475.20 25.29 0.00 0.00 0.00 0.00 0.00 00:09:52.718 00:09:53.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.651 Nvme0n1 : 6.00 6496.67 25.38 0.00 0.00 0.00 0.00 0.00 00:09:53.651 [2024-12-10T10:23:28.878Z] =================================================================================================================== 00:09:53.651 [2024-12-10T10:23:28.878Z] Total : 6496.67 25.38 0.00 0.00 0.00 0.00 0.00 00:09:53.651 00:09:54.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.583 Nvme0n1 : 7.00 6512.00 25.44 0.00 0.00 0.00 0.00 0.00 00:09:54.583 [2024-12-10T10:23:29.810Z] =================================================================================================================== 00:09:54.583 [2024-12-10T10:23:29.810Z] Total : 6512.00 25.44 0.00 0.00 0.00 0.00 0.00 00:09:54.583 00:09:55.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.518 Nvme0n1 : 8.00 6491.75 25.36 0.00 0.00 0.00 0.00 0.00 00:09:55.518 [2024-12-10T10:23:30.745Z] =================================================================================================================== 00:09:55.518 [2024-12-10T10:23:30.745Z] Total : 6491.75 25.36 0.00 0.00 0.00 0.00 0.00 00:09:55.518 00:09:56.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.452 Nvme0n1 : 9.00 6422.44 25.09 0.00 0.00 0.00 0.00 0.00 00:09:56.452 [2024-12-10T10:23:31.679Z] =================================================================================================================== 00:09:56.452 [2024-12-10T10:23:31.679Z] Total : 6422.44 25.09 0.00 0.00 0.00 0.00 0.00 00:09:56.452 00:09:57.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.388 Nvme0n1 : 10.00 6415.20 25.06 0.00 0.00 0.00 0.00 0.00 00:09:57.388 [2024-12-10T10:23:32.615Z] =================================================================================================================== 00:09:57.388 [2024-12-10T10:23:32.615Z] Total : 6415.20 25.06 0.00 0.00 0.00 0.00 0.00 00:09:57.388 00:09:57.388 00:09:57.388 Latency(us) 00:09:57.388 [2024-12-10T10:23:32.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.388 Nvme0n1 : 10.01 6419.39 25.08 0.00 0.00 19933.34 6940.86 133455.13 00:09:57.388 [2024-12-10T10:23:32.615Z] =================================================================================================================== 00:09:57.388 [2024-12-10T10:23:32.615Z] Total : 6419.39 25.08 0.00 0.00 19933.34 6940.86 133455.13 00:09:57.388 { 00:09:57.388 "results": [ 00:09:57.388 { 00:09:57.388 "job": "Nvme0n1", 00:09:57.388 "core_mask": "0x2", 00:09:57.388 "workload": "randwrite", 00:09:57.388 "status": "finished", 00:09:57.388 "queue_depth": 128, 00:09:57.388 "io_size": 4096, 00:09:57.388 "runtime": 
10.013411, 00:09:57.388 "iops": 6419.3909547905305, 00:09:57.388 "mibps": 25.07574591715051, 00:09:57.388 "io_failed": 0, 00:09:57.388 "io_timeout": 0, 00:09:57.388 "avg_latency_us": 19933.33959563274, 00:09:57.388 "min_latency_us": 6940.858181818182, 00:09:57.388 "max_latency_us": 133455.12727272726 00:09:57.388 } 00:09:57.388 ], 00:09:57.388 "core_count": 1 00:09:57.388 } 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76432 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 76432 ']' 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 76432 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76432 00:09:57.388 killing process with pid 76432 00:09:57.388 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.388 00:09:57.388 Latency(us) 00:09:57.388 [2024-12-10T10:23:32.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.388 [2024-12-10T10:23:32.615Z] =================================================================================================================== 00:09:57.388 [2024-12-10T10:23:32.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76432' 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 76432 00:09:57.388 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 76432 00:09:57.647 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:57.905 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.163 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:58.163 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 76082 
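The trace above is the core of the lvs_grow_dirty case: bdevperf drives randwrite I/O at the NVMe/TCP-attached lvol for ten seconds while the lvstore is grown underneath it, and the target is then killed with SIGKILL so the grown metadata is never flushed cleanly. A condensed sketch of that sequence, reusing the rpc.py calls visible in the trace (socket path, UUID and pid variable are from this run; the real nvmf_lvs_grow.sh adds its own bookkeeping around these steps):

    # Attach the exported namespace to bdevperf over NVMe/TCP
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Start the queued bdevperf job (randwrite, qd 128, 4 KiB I/O) in the background
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Grow the lvstore while I/O is running, then confirm the new cluster count (99 here)
    scripts/rpc.py bdev_lvol_grow_lvstore -u 0a3acf2e-7232-48a4-a969-5da52302e0ea
    scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea \
        | jq -r '.[0].total_data_clusters'

    # Hard-kill the target so the lvstore is left dirty on the AIO backing file
    kill -9 "$nvmfpid"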
00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 76082 00:09:58.422 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 76082 Killed "${NVMF_APP[@]}" "$@" 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=76583 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 76583 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 76583 ']' 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.422 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.681 [2024-12-10 10:23:33.668021] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:58.681 [2024-12-10 10:23:33.668145] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.681 [2024-12-10 10:23:33.806700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.681 [2024-12-10 10:23:33.843917] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.681 [2024-12-10 10:23:33.844003] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.681 [2024-12-10 10:23:33.844030] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.681 [2024-12-10 10:23:33.844038] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.681 [2024-12-10 10:23:33.844044] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
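After the SIGKILL, the test restarts nvmf_tgt inside the same network namespace on a single core and blocks in waitforlisten until the new process answers on /var/tmp/spdk.sock. A rough, hypothetical approximation of that wait loop (the real waitforlisten helper in autotest_common.sh does considerably more checking):

    # Hypothetical sketch: restart the target and poll its RPC socket until it is up
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for i in $(seq 1 100); do
        # rpc_get_methods succeeds once the app is listening for RPCs
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done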
00:09:58.681 [2024-12-10 10:23:33.844076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.681 [2024-12-10 10:23:33.874812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.939 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:59.198 [2024-12-10 10:23:34.257700] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:59.198 [2024-12-10 10:23:34.257973] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:59.198 [2024-12-10 10:23:34.258141] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.198 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:59.456 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e96a0685-21d7-49fc-bdf1-7eb2926e44e2 -t 2000 00:09:59.715 [ 00:09:59.715 { 00:09:59.715 "name": "e96a0685-21d7-49fc-bdf1-7eb2926e44e2", 00:09:59.715 "aliases": [ 00:09:59.715 "lvs/lvol" 00:09:59.715 ], 00:09:59.715 "product_name": "Logical Volume", 00:09:59.715 "block_size": 4096, 00:09:59.715 "num_blocks": 38912, 00:09:59.715 "uuid": "e96a0685-21d7-49fc-bdf1-7eb2926e44e2", 00:09:59.715 "assigned_rate_limits": { 00:09:59.715 "rw_ios_per_sec": 0, 00:09:59.715 "rw_mbytes_per_sec": 0, 00:09:59.715 "r_mbytes_per_sec": 0, 00:09:59.715 "w_mbytes_per_sec": 0 00:09:59.715 }, 00:09:59.715 
"claimed": false, 00:09:59.715 "zoned": false, 00:09:59.715 "supported_io_types": { 00:09:59.715 "read": true, 00:09:59.715 "write": true, 00:09:59.715 "unmap": true, 00:09:59.715 "flush": false, 00:09:59.715 "reset": true, 00:09:59.715 "nvme_admin": false, 00:09:59.715 "nvme_io": false, 00:09:59.715 "nvme_io_md": false, 00:09:59.715 "write_zeroes": true, 00:09:59.715 "zcopy": false, 00:09:59.715 "get_zone_info": false, 00:09:59.715 "zone_management": false, 00:09:59.715 "zone_append": false, 00:09:59.715 "compare": false, 00:09:59.715 "compare_and_write": false, 00:09:59.715 "abort": false, 00:09:59.715 "seek_hole": true, 00:09:59.715 "seek_data": true, 00:09:59.715 "copy": false, 00:09:59.715 "nvme_iov_md": false 00:09:59.715 }, 00:09:59.715 "driver_specific": { 00:09:59.715 "lvol": { 00:09:59.715 "lvol_store_uuid": "0a3acf2e-7232-48a4-a969-5da52302e0ea", 00:09:59.715 "base_bdev": "aio_bdev", 00:09:59.715 "thin_provision": false, 00:09:59.715 "num_allocated_clusters": 38, 00:09:59.715 "snapshot": false, 00:09:59.715 "clone": false, 00:09:59.715 "esnap_clone": false 00:09:59.715 } 00:09:59.715 } 00:09:59.715 } 00:09:59.715 ] 00:09:59.716 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:59.716 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:59.716 10:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:59.974 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:59.974 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:09:59.974 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:00.231 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:00.231 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:00.491 [2024-12-10 10:23:35.571973] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.491 10:23:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:00.491 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:10:00.774 request: 00:10:00.774 { 00:10:00.774 "uuid": "0a3acf2e-7232-48a4-a969-5da52302e0ea", 00:10:00.774 "method": "bdev_lvol_get_lvstores", 00:10:00.774 "req_id": 1 00:10:00.774 } 00:10:00.774 Got JSON-RPC error response 00:10:00.774 response: 00:10:00.774 { 00:10:00.774 "code": -19, 00:10:00.774 "message": "No such device" 00:10:00.774 } 00:10:00.774 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:00.774 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:00.774 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:00.774 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:00.774 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.039 aio_bdev 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.039 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.297 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e96a0685-21d7-49fc-bdf1-7eb2926e44e2 -t 2000 00:10:01.555 [ 00:10:01.555 { 
00:10:01.555 "name": "e96a0685-21d7-49fc-bdf1-7eb2926e44e2", 00:10:01.555 "aliases": [ 00:10:01.555 "lvs/lvol" 00:10:01.555 ], 00:10:01.555 "product_name": "Logical Volume", 00:10:01.555 "block_size": 4096, 00:10:01.555 "num_blocks": 38912, 00:10:01.555 "uuid": "e96a0685-21d7-49fc-bdf1-7eb2926e44e2", 00:10:01.555 "assigned_rate_limits": { 00:10:01.555 "rw_ios_per_sec": 0, 00:10:01.555 "rw_mbytes_per_sec": 0, 00:10:01.555 "r_mbytes_per_sec": 0, 00:10:01.555 "w_mbytes_per_sec": 0 00:10:01.555 }, 00:10:01.555 "claimed": false, 00:10:01.555 "zoned": false, 00:10:01.555 "supported_io_types": { 00:10:01.555 "read": true, 00:10:01.555 "write": true, 00:10:01.555 "unmap": true, 00:10:01.555 "flush": false, 00:10:01.555 "reset": true, 00:10:01.555 "nvme_admin": false, 00:10:01.555 "nvme_io": false, 00:10:01.555 "nvme_io_md": false, 00:10:01.555 "write_zeroes": true, 00:10:01.555 "zcopy": false, 00:10:01.555 "get_zone_info": false, 00:10:01.555 "zone_management": false, 00:10:01.555 "zone_append": false, 00:10:01.555 "compare": false, 00:10:01.555 "compare_and_write": false, 00:10:01.555 "abort": false, 00:10:01.555 "seek_hole": true, 00:10:01.555 "seek_data": true, 00:10:01.555 "copy": false, 00:10:01.555 "nvme_iov_md": false 00:10:01.555 }, 00:10:01.555 "driver_specific": { 00:10:01.555 "lvol": { 00:10:01.556 "lvol_store_uuid": "0a3acf2e-7232-48a4-a969-5da52302e0ea", 00:10:01.556 "base_bdev": "aio_bdev", 00:10:01.556 "thin_provision": false, 00:10:01.556 "num_allocated_clusters": 38, 00:10:01.556 "snapshot": false, 00:10:01.556 "clone": false, 00:10:01.556 "esnap_clone": false 00:10:01.556 } 00:10:01.556 } 00:10:01.556 } 00:10:01.556 ] 00:10:01.556 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:01.556 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:10:01.556 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:01.814 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:01.814 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:01.814 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:10:02.072 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:02.072 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e96a0685-21d7-49fc-bdf1-7eb2926e44e2 00:10:02.330 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a3acf2e-7232-48a4-a969-5da52302e0ea 00:10:02.589 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.847 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.106 00:10:03.106 real 0m19.299s 00:10:03.106 user 0m39.785s 00:10:03.106 sys 0m8.964s 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.106 ************************************ 00:10:03.106 END TEST lvs_grow_dirty 00:10:03.106 ************************************ 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:03.106 nvmf_trace.0 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:03.106 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.673 rmmod nvme_tcp 00:10:03.673 rmmod nvme_fabrics 00:10:03.673 rmmod nvme_keyring 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 76583 ']' 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 76583 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 76583 ']' 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 76583 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:03.673 10:23:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.673 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76583 00:10:03.932 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.932 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.932 killing process with pid 76583 00:10:03.932 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76583' 00:10:03.932 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 76583 00:10:03.932 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 76583 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:03.932 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:04.191 00:10:04.191 real 0m40.034s 00:10:04.191 user 1m2.597s 00:10:04.191 sys 0m12.420s 00:10:04.191 ************************************ 00:10:04.191 END TEST nvmf_lvs_grow 00:10:04.191 ************************************ 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.191 ************************************ 00:10:04.191 START TEST nvmf_bdev_io_wait 00:10:04.191 ************************************ 00:10:04.191 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.451 * Looking for test storage... 
00:10:04.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:04.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.451 --rc genhtml_branch_coverage=1 00:10:04.451 --rc genhtml_function_coverage=1 00:10:04.451 --rc genhtml_legend=1 00:10:04.451 --rc geninfo_all_blocks=1 00:10:04.451 --rc geninfo_unexecuted_blocks=1 00:10:04.451 00:10:04.451 ' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:04.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.451 --rc genhtml_branch_coverage=1 00:10:04.451 --rc genhtml_function_coverage=1 00:10:04.451 --rc genhtml_legend=1 00:10:04.451 --rc geninfo_all_blocks=1 00:10:04.451 --rc geninfo_unexecuted_blocks=1 00:10:04.451 00:10:04.451 ' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:04.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.451 --rc genhtml_branch_coverage=1 00:10:04.451 --rc genhtml_function_coverage=1 00:10:04.451 --rc genhtml_legend=1 00:10:04.451 --rc geninfo_all_blocks=1 00:10:04.451 --rc geninfo_unexecuted_blocks=1 00:10:04.451 00:10:04.451 ' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:04.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.451 --rc genhtml_branch_coverage=1 00:10:04.451 --rc genhtml_function_coverage=1 00:10:04.451 --rc genhtml_legend=1 00:10:04.451 --rc geninfo_all_blocks=1 00:10:04.451 --rc geninfo_unexecuted_blocks=1 00:10:04.451 00:10:04.451 ' 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.451 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
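bdev_io_wait.sh sizes its backing device with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, i.e. a 64 MiB malloc bdev with 512-byte blocks. The export itself happens after this excerpt, but with stock SPDK RPCs it typically takes the shape below; the bdev and subsystem names are illustrative placeholders, while the sizes, serial number and transport options come from the trace:

    # Illustrative export of a 64 MiB / 512 B malloc bdev over NVMe/TCP
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420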
00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.452 
10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:04.452 Cannot find device "nvmf_init_br" 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:04.452 Cannot find device "nvmf_init_br2" 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:04.452 Cannot find device "nvmf_tgt_br" 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.452 Cannot find device "nvmf_tgt_br2" 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:04.452 Cannot find device "nvmf_init_br" 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:04.452 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:04.711 Cannot find device "nvmf_init_br2" 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:04.711 Cannot find device "nvmf_tgt_br" 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:04.711 Cannot find device "nvmf_tgt_br2" 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:04.711 Cannot find device "nvmf_br" 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:04.711 Cannot find device "nvmf_init_if" 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:04.711 Cannot find device "nvmf_init_if2" 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:04.711 
10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.711 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:04.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:04.970 00:10:04.970 --- 10.0.0.3 ping statistics --- 00:10:04.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.970 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:04.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:04.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:04.970 00:10:04.970 --- 10.0.0.4 ping statistics --- 00:10:04.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.970 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:04.970 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:04.970 00:10:04.970 --- 10.0.0.1 ping statistics --- 00:10:04.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.970 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:04.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:04.970 00:10:04.970 --- 10.0.0.2 ping statistics --- 00:10:04.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.970 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=76943 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 76943 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 76943 ']' 00:10:04.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.970 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.970 [2024-12-10 10:23:40.086353] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
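With the addressing and ping checks above, nvmf_veth_init is complete: nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) stay in the root namespace, all four *_br veth peers hang off the nvmf_br bridge, and only nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) live inside nvmf_tgt_ns_spdk. The iptables ACCEPT rules for TCP port 4420 carry an SPDK_NVMF comment so nvmftestfini can strip exactly those rules later, and the target application is then launched inside the namespace with --wait-for-rpc so bdev options can be set over RPC before the framework initializes. A rough sketch of the resulting wiring (interface names and addresses taken from the trace; the diagram itself is only illustrative):

  # nvmf_init_if  10.0.0.1/24 -- nvmf_init_br  --+
  # nvmf_init_if2 10.0.0.2/24 -- nvmf_init_br2 --+-- nvmf_br --+-- nvmf_tgt_br  -- nvmf_tgt_if  10.0.0.3/24 (in nvmf_tgt_ns_spdk)
  #                                                            +-- nvmf_tgt_br2 -- nvmf_tgt_if2 10.0.0.4/24 (in nvmf_tgt_ns_spdk)
  # (the bridge and every *_br peer sit in the root namespace; only nvmf_tgt_if/nvmf_tgt_if2 are in the netns)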
00:10:04.970 [2024-12-10 10:23:40.086465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.229 [2024-12-10 10:23:40.224715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.229 [2024-12-10 10:23:40.269204] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.229 [2024-12-10 10:23:40.269506] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.229 [2024-12-10 10:23:40.269673] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.229 [2024-12-10 10:23:40.269824] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.229 [2024-12-10 10:23:40.269873] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.229 [2024-12-10 10:23:40.270127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.229 [2024-12-10 10:23:40.270243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.229 [2024-12-10 10:23:40.270331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.229 [2024-12-10 10:23:40.270332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.229 [2024-12-10 10:23:40.441096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.229 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.488 [2024-12-10 10:23:40.456610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.488 Malloc0 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.488 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.489 [2024-12-10 10:23:40.520449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76970 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76972 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76974 00:10:05.489 10:23:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:05.489 { 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme$subsystem", 00:10:05.489 "trtype": "$TEST_TRANSPORT", 00:10:05.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "$NVMF_PORT", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.489 "hdgst": ${hdgst:-false}, 00:10:05.489 "ddgst": ${ddgst:-false} 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 } 00:10:05.489 EOF 00:10:05.489 )") 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76975 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:05.489 { 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme$subsystem", 00:10:05.489 "trtype": "$TEST_TRANSPORT", 00:10:05.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "$NVMF_PORT", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.489 "hdgst": ${hdgst:-false}, 00:10:05.489 "ddgst": ${ddgst:-false} 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 } 00:10:05.489 EOF 00:10:05.489 )") 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:05.489 { 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme$subsystem", 00:10:05.489 "trtype": "$TEST_TRANSPORT", 00:10:05.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "$NVMF_PORT", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.489 "hdgst": ${hdgst:-false}, 00:10:05.489 "ddgst": ${ddgst:-false} 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 } 00:10:05.489 EOF 00:10:05.489 )") 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme1", 00:10:05.489 "trtype": "tcp", 00:10:05.489 "traddr": "10.0.0.3", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "4420", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.489 "hdgst": false, 00:10:05.489 "ddgst": false 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 }' 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:05.489 { 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme$subsystem", 00:10:05.489 "trtype": "$TEST_TRANSPORT", 00:10:05.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "$NVMF_PORT", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.489 "hdgst": ${hdgst:-false}, 00:10:05.489 "ddgst": ${ddgst:-false} 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 } 00:10:05.489 EOF 00:10:05.489 )") 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme1", 00:10:05.489 "trtype": "tcp", 00:10:05.489 "traddr": "10.0.0.3", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "4420", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.489 "hdgst": false, 00:10:05.489 "ddgst": false 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 }' 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme1", 00:10:05.489 "trtype": "tcp", 00:10:05.489 "traddr": "10.0.0.3", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "4420", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.489 "hdgst": false, 00:10:05.489 "ddgst": false 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 }' 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:05.489 "params": { 00:10:05.489 "name": "Nvme1", 00:10:05.489 "trtype": "tcp", 00:10:05.489 "traddr": "10.0.0.3", 00:10:05.489 "adrfam": "ipv4", 00:10:05.489 "trsvcid": "4420", 00:10:05.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.489 "hdgst": false, 00:10:05.489 "ddgst": false 00:10:05.489 }, 00:10:05.489 "method": "bdev_nvme_attach_controller" 00:10:05.489 }' 00:10:05.489 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76970 00:10:05.489 [2024-12-10 10:23:40.590642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:05.489 [2024-12-10 10:23:40.591470] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:05.489 [2024-12-10 10:23:40.602798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:05.489 [2024-12-10 10:23:40.603026] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:05.489 [2024-12-10 10:23:40.631980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:05.489 [2024-12-10 10:23:40.633007] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:05.489 [2024-12-10 10:23:40.643941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:05.489 [2024-12-10 10:23:40.644356] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:05.748 [2024-12-10 10:23:40.772635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.748 [2024-12-10 10:23:40.803493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:05.748 [2024-12-10 10:23:40.814537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.748 [2024-12-10 10:23:40.836030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.748 [2024-12-10 10:23:40.842162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:05.748 [2024-12-10 10:23:40.855608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.748 [2024-12-10 10:23:40.880958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.748 [2024-12-10 10:23:40.883532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:10:05.748 [2024-12-10 10:23:40.913196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.748 [2024-12-10 10:23:40.927306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.748 Running I/O for 1 seconds... 00:10:05.748 [2024-12-10 10:23:40.947196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:06.007 [2024-12-10 10:23:40.987807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.007 Running I/O for 1 seconds... 00:10:06.007 Running I/O for 1 seconds... 00:10:06.007 Running I/O for 1 seconds... 
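The four "Running I/O for 1 seconds..." lines come from four bdevperf instances started in parallel against the same Nvme1 controller on 10.0.0.3:4420, each pinned to its own core (-m 0x10/0x20/0x40/0x80) and driving a different workload (write, read, flush, unmap) at queue depth 128 with 4 KiB I/O. Each instance reads its bdev configuration from a process substitution (/dev/fd/63) produced by gen_nvmf_target_json; the resolved bdev_nvme_attach_controller parameters are the JSON fragments printed above. A condensed sketch of the launches, with gen_conf standing in for gen_nvmf_target_json (whose full JSON envelope is not shown in this excerpt):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $BDEVPERF -m 0x10 -i 1 --json <(gen_conf) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BDEVPERF -m 0x20 -i 2 --json <(gen_conf) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BDEVPERF -m 0x40 -i 3 --json <(gen_conf) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BDEVPERF -m 0x80 -i 4 --json <(gen_conf) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # bdev_io_wait.sh waits on each PID in turn

The per-job tables that follow report each workload's one-second IOPS and average/min/max latency in microseconds.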
00:10:06.942 162968.00 IOPS, 636.59 MiB/s
00:10:06.942 Latency(us)
00:10:06.942 [2024-12-10T10:23:42.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:06.942 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:06.942 Nvme1n1 : 1.00 162585.79 635.10 0.00 0.00 782.83 422.63 2338.44
00:10:06.942 [2024-12-10T10:23:42.169Z] ===================================================================================================================
00:10:06.942 [2024-12-10T10:23:42.169Z] Total : 162585.79 635.10 0.00 0.00 782.83 422.63 2338.44
00:10:06.942 9079.00 IOPS, 35.46 MiB/s
00:10:06.942 Latency(us)
00:10:06.942 [2024-12-10T10:23:42.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:06.942 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:06.942 Nvme1n1 : 1.01 9126.00 35.65 0.00 0.00 13960.97 7417.48 22997.18
00:10:06.942 [2024-12-10T10:23:42.169Z] ===================================================================================================================
00:10:06.942 [2024-12-10T10:23:42.169Z] Total : 9126.00 35.65 0.00 0.00 13960.97 7417.48 22997.18
00:10:06.942 8793.00 IOPS, 34.35 MiB/s
00:10:06.942 Latency(us)
00:10:06.942 [2024-12-10T10:23:42.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:06.942 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:06.942 Nvme1n1 : 1.01 8862.99 34.62 0.00 0.00 14378.08 7238.75 24427.05
00:10:06.942 [2024-12-10T10:23:42.169Z] ===================================================================================================================
00:10:06.942 [2024-12-10T10:23:42.169Z] Total : 8862.99 34.62 0.00 0.00 14378.08 7238.75 24427.05
00:10:06.942 8060.00 IOPS, 31.48 MiB/s
00:10:06.942 Latency(us)
00:10:06.942 [2024-12-10T10:23:42.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:06.942 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:06.942 Nvme1n1 : 1.01 8129.99 31.76 0.00 0.00 15669.13 4647.10 25499.46
00:10:06.942 [2024-12-10T10:23:42.169Z] ===================================================================================================================
00:10:06.942 [2024-12-10T10:23:42.169Z] Total : 8129.99 31.76 0.00 0.00 15669.13 4647.10 25499.46
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76972
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76974
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76975
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.202 rmmod nvme_tcp 00:10:07.202 rmmod nvme_fabrics 00:10:07.202 rmmod nvme_keyring 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 76943 ']' 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 76943 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 76943 ']' 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 76943 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76943 00:10:07.202 killing process with pid 76943 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76943' 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 76943 00:10:07.202 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 76943 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:07.461 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:07.720 00:10:07.720 real 0m3.375s 00:10:07.720 user 0m13.146s 00:10:07.720 sys 0m2.150s 00:10:07.720 ************************************ 00:10:07.720 END TEST nvmf_bdev_io_wait 00:10:07.720 ************************************ 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.720 ************************************ 00:10:07.720 START TEST nvmf_queue_depth 00:10:07.720 ************************************ 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.720 * Looking for test storage... 
00:10:07.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.720 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.980 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.981 --rc genhtml_branch_coverage=1 00:10:07.981 --rc genhtml_function_coverage=1 00:10:07.981 --rc genhtml_legend=1 00:10:07.981 --rc geninfo_all_blocks=1 00:10:07.981 --rc geninfo_unexecuted_blocks=1 00:10:07.981 00:10:07.981 ' 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.981 --rc genhtml_branch_coverage=1 00:10:07.981 --rc genhtml_function_coverage=1 00:10:07.981 --rc genhtml_legend=1 00:10:07.981 --rc geninfo_all_blocks=1 00:10:07.981 --rc geninfo_unexecuted_blocks=1 00:10:07.981 00:10:07.981 ' 00:10:07.981 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.981 --rc genhtml_branch_coverage=1 00:10:07.981 --rc genhtml_function_coverage=1 00:10:07.981 --rc genhtml_legend=1 00:10:07.981 --rc geninfo_all_blocks=1 00:10:07.981 --rc geninfo_unexecuted_blocks=1 00:10:07.981 00:10:07.981 ' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.981 --rc genhtml_branch_coverage=1 00:10:07.981 --rc genhtml_function_coverage=1 00:10:07.981 --rc genhtml_legend=1 00:10:07.981 --rc geninfo_all_blocks=1 00:10:07.981 --rc geninfo_unexecuted_blocks=1 00:10:07.981 00:10:07.981 ' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:07.981 
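The "[: : integer expression expected" message printed while nvmf/common.sh was being sourced above is bash complaining that an empty string reached a numeric test ('[' '' -eq 1 ']') inside build_nvmf_app_args at line 33; which flag variable is empty in this environment is not visible in the excerpt. The failed test simply evaluates as false and the run carries on, so the line is noise rather than a failure. A small sketch of the failure mode and a guarded form, with the variable name chosen only for illustration:

  some_flag=""                       # unset/empty in this environment
  if [ "$some_flag" -eq 1 ]; then    # numeric test on an empty string -> "integer expression expected"
      echo "feature enabled"
  fi                                 # the condition is just false; the script continues
  if [ "${some_flag:-0}" -eq 1 ]; then echo "feature enabled"; fi   # guarded form avoids the message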
10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:07.981 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.982 10:23:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:07.982 Cannot find device "nvmf_init_br" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:07.982 Cannot find device "nvmf_init_br2" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:07.982 Cannot find device "nvmf_tgt_br" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.982 Cannot find device "nvmf_tgt_br2" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:07.982 Cannot find device "nvmf_init_br" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:07.982 Cannot find device "nvmf_init_br2" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:07.982 Cannot find device "nvmf_tgt_br" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:07.982 Cannot find device "nvmf_tgt_br2" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:07.982 Cannot find device "nvmf_br" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:07.982 Cannot find device "nvmf_init_if" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:07.982 Cannot find device "nvmf_init_if2" 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.982 10:23:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:07.982 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.241 
10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.241 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:08.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:10:08.242 00:10:08.242 --- 10.0.0.3 ping statistics --- 00:10:08.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.242 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:08.242 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:08.242 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:10:08.242 00:10:08.242 --- 10.0.0.4 ping statistics --- 00:10:08.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.242 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:08.242 00:10:08.242 --- 10.0.0.1 ping statistics --- 00:10:08.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.242 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:08.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:08.242 00:10:08.242 --- 10.0.0.2 ping statistics --- 00:10:08.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.242 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:08.242 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=77230 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 77230 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 77230 ']' 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.502 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.502 [2024-12-10 10:23:43.531221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
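Before the target comes up, the trace above opens TCP port 4420 on both initiator interfaces, allows forwarding across the bridge, and ping-checks all four addresses in both directions; nvmf_tgt is then launched on core 1 inside the namespace and the test waits for its RPC socket. A minimal equivalent is sketched below, using the repo path from the trace; waitforlisten is the common.sh helper, so a simple RPC poll loop stands in for it here (an assumption, not the helper's actual implementation):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Allow NVMe/TCP (port 4420) in on both initiator interfaces and forwarding on the bridge.
    # (The common.sh helper also tags each rule with an 'SPDK_NVMF:' comment so cleanup can find it.)
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    modprobe nvme-tcp
    # Start the target on core 1 (-m 0x2) inside the namespace, then wait for /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
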
00:10:08.502 [2024-12-10 10:23:43.531299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.502 [2024-12-10 10:23:43.670835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.502 [2024-12-10 10:23:43.715451] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.502 [2024-12-10 10:23:43.715504] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.502 [2024-12-10 10:23:43.715517] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.502 [2024-12-10 10:23:43.715527] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.502 [2024-12-10 10:23:43.715535] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.502 [2024-12-10 10:23:43.715567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.761 [2024-12-10 10:23:43.749627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.761 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.761 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:08.761 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:08.761 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.761 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.761 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 [2024-12-10 10:23:43.840140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 Malloc0 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 [2024-12-10 10:23:43.897917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=77260 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 77260 /var/tmp/bdevperf.sock 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 77260 ']' 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.762 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 [2024-12-10 10:23:43.950921] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
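With the target listening on its RPC socket, the queue_depth test provisions it exactly as traced above: a TCP transport, a 64 MiB malloc bdev, one subsystem, one namespace, and one listener on 10.0.0.3:4420. The same sequence expressed with scripts/rpc.py is shown below (rpc_cmd in the trace plays the same role; command arguments are copied from the log, and the SPDK path assumption matches the trace):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192      # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS; -u 8192 = in-capsule data size
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
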
00:10:08.762 [2024-12-10 10:23:43.951005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77260 ] 00:10:09.020 [2024-12-10 10:23:44.080347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.020 [2024-12-10 10:23:44.120197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.020 [2024-12-10 10:23:44.153549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.020 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.020 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:09.020 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:09.020 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.020 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.279 NVMe0n1 00:10:09.279 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.279 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:09.279 Running I/O for 10 seconds... 00:10:11.600 7188.00 IOPS, 28.08 MiB/s [2024-12-10T10:23:47.763Z] 7762.50 IOPS, 30.32 MiB/s [2024-12-10T10:23:48.697Z] 8089.00 IOPS, 31.60 MiB/s [2024-12-10T10:23:49.633Z] 8279.25 IOPS, 32.34 MiB/s [2024-12-10T10:23:50.569Z] 8484.20 IOPS, 33.14 MiB/s [2024-12-10T10:23:51.506Z] 8595.50 IOPS, 33.58 MiB/s [2024-12-10T10:23:52.442Z] 8796.57 IOPS, 34.36 MiB/s [2024-12-10T10:23:53.820Z] 8940.00 IOPS, 34.92 MiB/s [2024-12-10T10:23:54.757Z] 9039.33 IOPS, 35.31 MiB/s [2024-12-10T10:23:54.757Z] 9118.30 IOPS, 35.62 MiB/s 00:10:19.530 Latency(us) 00:10:19.530 [2024-12-10T10:23:54.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.530 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:19.530 Verification LBA range: start 0x0 length 0x4000 00:10:19.530 NVMe0n1 : 10.08 9147.25 35.73 0.00 0.00 111460.58 24546.21 91512.09 00:10:19.530 [2024-12-10T10:23:54.757Z] =================================================================================================================== 00:10:19.530 [2024-12-10T10:23:54.757Z] Total : 9147.25 35.73 0.00 0.00 111460.58 24546.21 91512.09 00:10:19.530 { 00:10:19.530 "results": [ 00:10:19.530 { 00:10:19.530 "job": "NVMe0n1", 00:10:19.530 "core_mask": "0x1", 00:10:19.530 "workload": "verify", 00:10:19.530 "status": "finished", 00:10:19.530 "verify_range": { 00:10:19.530 "start": 0, 00:10:19.530 "length": 16384 00:10:19.530 }, 00:10:19.530 "queue_depth": 1024, 00:10:19.530 "io_size": 4096, 00:10:19.530 "runtime": 10.07855, 00:10:19.530 "iops": 9147.248364099994, 00:10:19.530 "mibps": 35.7314389222656, 00:10:19.530 "io_failed": 0, 00:10:19.530 "io_timeout": 0, 00:10:19.530 "avg_latency_us": 111460.58169243498, 00:10:19.530 "min_latency_us": 24546.21090909091, 00:10:19.530 "max_latency_us": 91512.08727272728 00:10:19.530 } 
00:10:19.530 ], 00:10:19.530 "core_count": 1 00:10:19.530 } 00:10:19.530 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 77260 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 77260 ']' 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 77260 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77260 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.531 killing process with pid 77260 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77260' 00:10:19.531 Received shutdown signal, test time was about 10.000000 seconds 00:10:19.531 00:10:19.531 Latency(us) 00:10:19.531 [2024-12-10T10:23:54.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.531 [2024-12-10T10:23:54.758Z] =================================================================================================================== 00:10:19.531 [2024-12-10T10:23:54.758Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 77260 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 77260 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.531 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.531 rmmod nvme_tcp 00:10:19.531 rmmod nvme_fabrics 00:10:19.531 rmmod nvme_keyring 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 77230 ']' 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 77230 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 77230 ']' 00:10:19.790 
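The I/O phase summarized above is driven by the bdevperf example application rather than the kernel initiator: bdevperf is started in wait mode (-z) on its own RPC socket, the exported subsystem is attached to it as an NVMe bdev over that socket, and perform_tests kicks off the 10-second verify workload at queue depth 1024 whose results appear in the JSON block. A sketch of the same flow, using the paths and arguments from the trace (the socket-readiness poll is an assumption standing in for waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bdevperf.sock
    # 1024 outstanding I/Os, 4 KiB each, verify workload, 10 s runtime.
    "$SPDK/build/examples/bdevperf" -z -r "$BPERF_SOCK" -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # Attach the exported subsystem as bdev "NVMe0" through bdevperf's RPC socket...
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # ...then run the configured workload and collect the summary seen above.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
    kill "$bdevperf_pid"; wait "$bdevperf_pid"
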
10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 77230 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77230 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:19.790 killing process with pid 77230 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77230' 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 77230 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 77230 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:19.790 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:19.791 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:19.791 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:19.791 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.791 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:19.791 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:19.791 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:20.049 10:23:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:20.049 00:10:20.049 real 0m12.407s 00:10:20.049 user 0m21.100s 00:10:20.049 sys 0m2.094s 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.049 ************************************ 00:10:20.049 END TEST nvmf_queue_depth 00:10:20.049 ************************************ 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.049 10:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.307 ************************************ 00:10:20.307 START TEST nvmf_target_multipath 00:10:20.307 ************************************ 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.307 * Looking for test storage... 
00:10:20.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.307 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:20.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.308 --rc genhtml_branch_coverage=1 00:10:20.308 --rc genhtml_function_coverage=1 00:10:20.308 --rc genhtml_legend=1 00:10:20.308 --rc geninfo_all_blocks=1 00:10:20.308 --rc geninfo_unexecuted_blocks=1 00:10:20.308 00:10:20.308 ' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:20.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.308 --rc genhtml_branch_coverage=1 00:10:20.308 --rc genhtml_function_coverage=1 00:10:20.308 --rc genhtml_legend=1 00:10:20.308 --rc geninfo_all_blocks=1 00:10:20.308 --rc geninfo_unexecuted_blocks=1 00:10:20.308 00:10:20.308 ' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:20.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.308 --rc genhtml_branch_coverage=1 00:10:20.308 --rc genhtml_function_coverage=1 00:10:20.308 --rc genhtml_legend=1 00:10:20.308 --rc geninfo_all_blocks=1 00:10:20.308 --rc geninfo_unexecuted_blocks=1 00:10:20.308 00:10:20.308 ' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:20.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.308 --rc genhtml_branch_coverage=1 00:10:20.308 --rc genhtml_function_coverage=1 00:10:20.308 --rc genhtml_legend=1 00:10:20.308 --rc geninfo_all_blocks=1 00:10:20.308 --rc geninfo_unexecuted_blocks=1 00:10:20.308 00:10:20.308 ' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.308 
10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.308 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.309 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:20.309 10:23:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:20.309 Cannot find device "nvmf_init_br" 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:20.309 Cannot find device "nvmf_init_br2" 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:20.309 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:20.568 Cannot find device "nvmf_tgt_br" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.568 Cannot find device "nvmf_tgt_br2" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:20.568 Cannot find device "nvmf_init_br" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:20.568 Cannot find device "nvmf_init_br2" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:20.568 Cannot find device "nvmf_tgt_br" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:20.568 Cannot find device "nvmf_tgt_br2" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:20.568 Cannot find device "nvmf_br" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:20.568 Cannot find device "nvmf_init_if" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:20.568 Cannot find device "nvmf_init_if2" 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.568 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
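The multipath test starts with the same nvmftestinit sequence, so the "Cannot find device" and "Cannot open network namespace" messages above are expected: every stale interface and namespace from the previous test is deleted with the failure ignored (the "-- # true" entries) before the topology is rebuilt. Roughly, the pattern is the one sketched below; the exact helper structure in nvmf/common.sh differs, and the namespace itself is torn down by the remove_spdk_ns helper:

    # Idempotent cleanup before (re)building the topology; errors are deliberately ignored.
    for dev in nvmf_br nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link delete "$dev" 2>/dev/null || true
    done
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
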
00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.569 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:20.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:10:20.829 00:10:20.829 --- 10.0.0.3 ping statistics --- 00:10:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.829 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:20.829 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:20.829 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:10:20.829 00:10:20.829 --- 10.0.0.4 ping statistics --- 00:10:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.829 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:20.829 00:10:20.829 --- 10.0.0.1 ping statistics --- 00:10:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.829 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:20.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:20.829 00:10:20.829 --- 10.0.0.2 ping statistics --- 00:10:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.829 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=77622 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 77622 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 77622 ']' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.829 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:20.829 [2024-12-10 10:23:55.985351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:20.829 [2024-12-10 10:23:55.985488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.089 [2024-12-10 10:23:56.127756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.089 [2024-12-10 10:23:56.171994] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.089 [2024-12-10 10:23:56.172561] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.089 [2024-12-10 10:23:56.172838] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.089 [2024-12-10 10:23:56.173074] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.089 [2024-12-10 10:23:56.173154] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.089 [2024-12-10 10:23:56.173367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.089 [2024-12-10 10:23:56.173577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.089 [2024-12-10 10:23:56.174189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.089 [2024-12-10 10:23:56.174205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.089 [2024-12-10 10:23:56.207756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.033 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.033 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:22.033 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:22.033 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.033 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.033 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.033 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.292 [2024-12-10 10:23:57.265542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.292 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:22.551 Malloc0 00:10:22.551 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:22.809 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.066 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:23.324 [2024-12-10 10:23:58.369142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:23.324 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:23.582 [2024-12-10 10:23:58.677439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:23.583 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:23.842 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:23.842 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.842 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.842 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.842 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.842 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.745 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.745 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.745 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:26.004 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=77717 00:10:26.005 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:26.005 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:26.005 [global] 00:10:26.005 thread=1 00:10:26.005 invalidate=1 00:10:26.005 rw=randrw 00:10:26.005 time_based=1 00:10:26.005 runtime=6 00:10:26.005 ioengine=libaio 00:10:26.005 direct=1 00:10:26.005 bs=4096 00:10:26.005 iodepth=128 00:10:26.005 norandommap=0 00:10:26.005 numjobs=1 00:10:26.005 00:10:26.005 verify_dump=1 00:10:26.005 verify_backlog=512 00:10:26.005 verify_state_save=0 00:10:26.005 do_verify=1 00:10:26.005 verify=crc32c-intel 00:10:26.005 [job0] 00:10:26.005 filename=/dev/nvme0n1 00:10:26.005 Could not set queue depth (nvme0n1) 00:10:26.005 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.005 fio-3.35 00:10:26.005 Starting 1 thread 00:10:26.940 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:27.199 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:27.457 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:27.716 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:28.283 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 77717 00:10:32.473 00:10:32.473 job0: (groupid=0, jobs=1): err= 0: pid=77744: Tue Dec 10 10:24:07 2024 00:10:32.473 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(238MiB/6007msec) 00:10:32.473 slat (usec): min=2, max=7738, avg=58.60, stdev=242.30 00:10:32.473 clat (usec): min=1577, max=16980, avg=8594.89, stdev=1581.71 00:10:32.473 lat (usec): min=1586, max=16991, avg=8653.49, stdev=1586.14 00:10:32.473 clat percentiles (usec): 00:10:32.473 | 1.00th=[ 4490], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7701], 00:10:32.473 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:10:32.473 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[12125], 00:10:32.473 | 99.00th=[13566], 99.50th=[13960], 99.90th=[15139], 99.95th=[15401], 00:10:32.473 | 99.99th=[16909] 00:10:32.473 bw ( KiB/s): min= 9920, max=25720, per=51.63%, avg=20928.91, stdev=5518.11, samples=11 00:10:32.473 iops : min= 2480, max= 6430, avg=5232.18, stdev=1379.51, samples=11 00:10:32.473 write: IOPS=5892, BW=23.0MiB/s (24.1MB/s)(125MiB/5436msec); 0 zone resets 00:10:32.473 slat (usec): min=4, max=3056, avg=66.32, stdev=166.87 00:10:32.473 clat (usec): min=2562, max=15250, avg=7454.48, stdev=1344.77 00:10:32.473 lat (usec): min=2586, max=15255, avg=7520.80, stdev=1349.38 00:10:32.473 clat percentiles (usec): 00:10:32.473 | 1.00th=[ 3392], 5.00th=[ 4490], 10.00th=[ 5932], 20.00th=[ 6915], 00:10:32.473 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:10:32.473 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8979], 00:10:32.473 | 99.00th=[11469], 99.50th=[12256], 99.90th=[13042], 99.95th=[13960], 00:10:32.473 | 99.99th=[15008] 00:10:32.473 bw ( KiB/s): min=10120, max=25336, per=88.94%, avg=20961.55, stdev=5390.87, samples=11 00:10:32.473 iops : min= 2530, max= 6334, avg=5240.36, stdev=1347.71, samples=11 00:10:32.473 lat (msec) : 2=0.01%, 4=1.44%, 10=90.39%, 20=8.16% 00:10:32.473 cpu : usr=5.24%, sys=20.70%, ctx=5285, majf=0, minf=66 00:10:32.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:32.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.473 issued rwts: total=60868,32029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.473 00:10:32.473 Run status group 0 (all jobs): 00:10:32.473 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=238MiB (249MB), run=6007-6007msec 00:10:32.473 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=125MiB (131MB), run=5436-5436msec 00:10:32.473 00:10:32.473 Disk stats (read/write): 00:10:32.473 nvme0n1: ios=59996/31388, merge=0/0, ticks=493446/219567, in_queue=713013, util=98.70% 00:10:32.474 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:32.474 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=77819 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:32.749 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:32.749 [global] 00:10:32.749 thread=1 00:10:32.749 invalidate=1 00:10:32.749 rw=randrw 00:10:32.749 time_based=1 00:10:32.749 runtime=6 00:10:32.749 ioengine=libaio 00:10:32.749 direct=1 00:10:32.749 bs=4096 00:10:32.749 iodepth=128 00:10:32.749 norandommap=0 00:10:32.749 numjobs=1 00:10:32.749 00:10:32.749 verify_dump=1 00:10:32.749 verify_backlog=512 00:10:32.749 verify_state_save=0 00:10:32.749 do_verify=1 00:10:32.749 verify=crc32c-intel 00:10:32.749 [job0] 00:10:32.749 filename=/dev/nvme0n1 00:10:33.008 Could not set queue depth (nvme0n1) 00:10:33.008 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.008 fio-3.35 00:10:33.008 Starting 1 thread 00:10:33.946 10:24:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:34.206 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:34.465 
10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.465 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:34.725 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.984 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 77819 00:10:39.178 00:10:39.178 job0: (groupid=0, jobs=1): err= 0: pid=77840: Tue Dec 10 10:24:14 2024 00:10:39.178 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(267MiB/6003msec) 00:10:39.178 slat (usec): min=2, max=6130, avg=43.93, stdev=194.89 00:10:39.178 clat (usec): min=334, max=18073, avg=7735.22, stdev=2090.43 00:10:39.178 lat (usec): min=352, max=18133, avg=7779.15, stdev=2106.41 00:10:39.178 clat percentiles (usec): 00:10:39.178 | 1.00th=[ 2540], 5.00th=[ 3949], 10.00th=[ 4752], 20.00th=[ 5997], 00:10:39.178 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8356], 00:10:39.178 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11076], 00:10:39.178 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14615], 99.95th=[15401], 00:10:39.178 | 99.99th=[17433] 00:10:39.178 bw ( KiB/s): min= 7208, max=40784, per=52.35%, avg=23801.45, stdev=9599.78, samples=11 00:10:39.178 iops : min= 1802, max=10196, avg=5950.36, stdev=2399.95, samples=11 00:10:39.178 write: IOPS=6844, BW=26.7MiB/s (28.0MB/s)(140MiB/5237msec); 0 zone resets 00:10:39.178 slat (usec): min=3, max=5714, avg=53.53, stdev=143.85 00:10:39.178 clat (usec): min=741, max=16972, avg=6534.65, stdev=1960.32 00:10:39.178 lat (usec): min=777, max=16991, avg=6588.18, stdev=1976.16 00:10:39.178 clat percentiles (usec): 00:10:39.178 | 1.00th=[ 2409], 5.00th=[ 3228], 10.00th=[ 3720], 20.00th=[ 4424], 00:10:39.178 | 30.00th=[ 5211], 40.00th=[ 6521], 50.00th=[ 7177], 60.00th=[ 7570], 00:10:39.178 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8848], 00:10:39.178 | 99.00th=[11207], 99.50th=[12256], 99.90th=[14353], 99.95th=[15008], 00:10:39.178 | 99.99th=[16319] 00:10:39.178 bw ( KiB/s): min= 7632, max=39832, per=87.05%, avg=23834.91, stdev=9385.81, samples=11 00:10:39.178 iops : min= 1908, max= 9958, avg=5958.73, stdev=2346.45, samples=11 00:10:39.178 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.07% 00:10:39.178 lat (msec) : 2=0.35%, 4=7.76%, 10=86.63%, 20=5.16% 00:10:39.179 cpu : usr=6.15%, sys=21.88%, ctx=6032, majf=0, minf=139 00:10:39.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:39.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.179 issued rwts: total=68231,35845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.179 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:39.179 00:10:39.179 Run status group 0 (all jobs): 00:10:39.179 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=267MiB (279MB), run=6003-6003msec 00:10:39.179 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=140MiB (147MB), run=5237-5237msec 00:10:39.179 00:10:39.179 Disk stats (read/write): 00:10:39.179 nvme0n1: ios=67590/35083, merge=0/0, ticks=500460/213401, in_queue=713861, util=98.65% 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:39.179 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.438 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:39.438 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:39.438 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:39.438 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:39.438 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:39.438 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.698 rmmod nvme_tcp 00:10:39.698 rmmod nvme_fabrics 00:10:39.698 rmmod nvme_keyring 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
77622 ']' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 77622 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 77622 ']' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 77622 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77622 00:10:39.698 killing process with pid 77622 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77622' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 77622 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 77622 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:39.698 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:39.957 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:39.957 10:24:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.957 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:40.216 00:10:40.216 real 0m19.920s 00:10:40.216 user 1m13.664s 00:10:40.216 sys 0m10.176s 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.216 ************************************ 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:40.216 END TEST nvmf_target_multipath 00:10:40.216 ************************************ 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.216 ************************************ 00:10:40.216 START TEST nvmf_zcopy 00:10:40.216 ************************************ 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.216 * Looking for test storage... 
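The multipath run that just finished can be replayed by hand outside the harness. A minimal sketch of the sequence the trace above shows, assuming a running nvmf_tgt with scripts/rpc.py on the default /var/tmp/spdk.sock and the two target addresses used in this run; the host NQN/ID values and the nvme0c*n1 device names are specific to this log and will differ on another machine:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # transport plus a 64 x 512-byte-block malloc bdev (multipath.sh@59-61)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # one subsystem, one namespace, two listeners (-a allow any host, -r ANA reporting)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

    # connect the initiator over both paths with the same options used above
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

    # flip per-listener ANA states while fio runs (multipath.sh@92-99)
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized

    # the 4 KiB / iodepth 128 / 6-second randrw verify job the log shows
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v

While fio is running, the listener states set above are what check_ana_state polls for in /sys/block/nvme0c0n1/ana_state and /sys/block/nvme0c1n1/ana_state.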
00:10:40.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:40.216 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:40.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.475 --rc genhtml_branch_coverage=1 00:10:40.475 --rc genhtml_function_coverage=1 00:10:40.475 --rc genhtml_legend=1 00:10:40.475 --rc geninfo_all_blocks=1 00:10:40.475 --rc geninfo_unexecuted_blocks=1 00:10:40.475 00:10:40.475 ' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:40.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.475 --rc genhtml_branch_coverage=1 00:10:40.475 --rc genhtml_function_coverage=1 00:10:40.475 --rc genhtml_legend=1 00:10:40.475 --rc geninfo_all_blocks=1 00:10:40.475 --rc geninfo_unexecuted_blocks=1 00:10:40.475 00:10:40.475 ' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:40.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.475 --rc genhtml_branch_coverage=1 00:10:40.475 --rc genhtml_function_coverage=1 00:10:40.475 --rc genhtml_legend=1 00:10:40.475 --rc geninfo_all_blocks=1 00:10:40.475 --rc geninfo_unexecuted_blocks=1 00:10:40.475 00:10:40.475 ' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:40.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.475 --rc genhtml_branch_coverage=1 00:10:40.475 --rc genhtml_function_coverage=1 00:10:40.475 --rc genhtml_legend=1 00:10:40.475 --rc geninfo_all_blocks=1 00:10:40.475 --rc geninfo_unexecuted_blocks=1 00:10:40.475 00:10:40.475 ' 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
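The xtrace above is scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2 resolves to cmp_versions 1.15 '<' 2), splitting each version on '.', '-' and ':' and comparing the pieces numerically. A simplified, self-contained sketch of that comparison; version_lt is a hypothetical name, not the helper the script actually defines:

    # return 0 if $1 < $2, comparing dotted versions element by element
    # (simplified re-implementation of the cmp_versions logic traced above)
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            if ((x > y)); then return 1; fi
            if ((x < y)); then return 0; fi
        done
        return 1   # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"

The real cmp_versions takes the operator as an argument ('<' here, via the case "$op" block above) and handles the other comparisons as well; the sketch only covers the less-than path this run exercises.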
00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.475 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.476 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
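At this point common.sh has generated the host identity and nvmftestinit has armed its EXIT trap, so the fixture gets torn down even if the test dies early. A minimal sketch of that pattern, assuming nvme-cli is installed; cleanup_fixture is a hypothetical stand-in for the real nvmftestfini:

    # derive the host NQN/ID the same way the trace above shows
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # uuid portion (495b1d55-... in this run)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # arm teardown before creating namespaces, veths or iptables rules
    cleanup_fixture() {
        # stand-in for nvmftestfini: stop nvmf_tgt, delete veth/bridge/netns, drop iptables rules
        echo "tearing down test fixture"
    }
    trap cleanup_fixture SIGINT SIGTERM EXIT

The multipath test earlier used the same idea with an extra process_shm call in its trap (nvmf/common.sh@508), which is why a crashed run still dumps shared-memory diagnostics before exiting.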
00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:40.476 Cannot find device "nvmf_init_br" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:40.476 10:24:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:40.476 Cannot find device "nvmf_init_br2" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:40.476 Cannot find device "nvmf_tgt_br" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.476 Cannot find device "nvmf_tgt_br2" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:40.476 Cannot find device "nvmf_init_br" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:40.476 Cannot find device "nvmf_init_br2" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:40.476 Cannot find device "nvmf_tgt_br" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:40.476 Cannot find device "nvmf_tgt_br2" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:40.476 Cannot find device "nvmf_br" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:40.476 Cannot find device "nvmf_init_if" 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:40.476 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:40.477 Cannot find device "nvmf_init_if2" 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:40.477 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:40.735 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:40.736 10:24:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:40.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:40.736 00:10:40.736 --- 10.0.0.3 ping statistics --- 00:10:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.736 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:40.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:40.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:10:40.736 00:10:40.736 --- 10.0.0.4 ping statistics --- 00:10:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.736 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:40.736 00:10:40.736 --- 10.0.0.1 ping statistics --- 00:10:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.736 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:40.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
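The ipts wrapper used at nvmf/common.sh@217-219 above tags every rule it installs so that teardown can find and delete them later. A minimal sketch of that behaviour, inferred from the expanded iptables calls shown at @786 (the exact function body in nvmf/common.sh may differ):

    ipts() {
        # forward the arguments to iptables and append an SPDK_NVMF comment
        # containing the original rule spec, matching the expansions above
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT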
00:10:40.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:40.736 00:10:40.736 --- 10.0.0.2 ping statistics --- 00:10:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.736 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=78148 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 78148 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 78148 ']' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.736 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.994 [2024-12-10 10:24:15.993099] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
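At nvmf/common.sh@227 above, NVMF_APP is prefixed with the namespace command, so nvmfappstart launches the target inside nvmf_tgt_ns_spdk and then blocks until its RPC socket answers. A rough stand-in for that sequence, assuming the repo-relative paths shown in the log:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten polls the RPC socket until the app is ready; a minimal
    # equivalent is to retry a harmless RPC until it succeeds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done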
00:10:40.994 [2024-12-10 10:24:15.993244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.994 [2024-12-10 10:24:16.128230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.994 [2024-12-10 10:24:16.163344] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.994 [2024-12-10 10:24:16.163438] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.994 [2024-12-10 10:24:16.163450] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.994 [2024-12-10 10:24:16.163457] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.994 [2024-12-10 10:24:16.163464] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.994 [2024-12-10 10:24:16.163490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.994 [2024-12-10 10:24:16.193030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.253 [2024-12-10 10:24:16.301457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.253 [2024-12-10 10:24:16.317641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.253 malloc0 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:41.253 { 00:10:41.253 "params": { 00:10:41.253 "name": "Nvme$subsystem", 00:10:41.253 "trtype": "$TEST_TRANSPORT", 00:10:41.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.253 "adrfam": "ipv4", 00:10:41.253 "trsvcid": "$NVMF_PORT", 00:10:41.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.253 "hdgst": ${hdgst:-false}, 00:10:41.253 "ddgst": ${ddgst:-false} 00:10:41.253 }, 00:10:41.253 "method": "bdev_nvme_attach_controller" 00:10:41.253 } 00:10:41.253 EOF 00:10:41.253 )") 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
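The rpc_cmd calls traced above (target/zcopy.sh@22-30) forward their arguments to scripts/rpc.py. Issued directly, the target-side configuration amounts to the following sketch, using the exact arguments from the log:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy     # TCP transport with zero-copy enabled
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB ramdisk, 4096-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1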
00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:41.253 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:41.253 "params": { 00:10:41.253 "name": "Nvme1", 00:10:41.253 "trtype": "tcp", 00:10:41.253 "traddr": "10.0.0.3", 00:10:41.253 "adrfam": "ipv4", 00:10:41.253 "trsvcid": "4420", 00:10:41.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.253 "hdgst": false, 00:10:41.253 "ddgst": false 00:10:41.253 }, 00:10:41.253 "method": "bdev_nvme_attach_controller" 00:10:41.253 }' 00:10:41.253 [2024-12-10 10:24:16.434519] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:41.253 [2024-12-10 10:24:16.434609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78168 ] 00:10:41.512 [2024-12-10 10:24:16.578615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.512 [2024-12-10 10:24:16.621024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.512 [2024-12-10 10:24:16.662868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.771 Running I/O for 10 seconds... 00:10:43.645 6138.00 IOPS, 47.95 MiB/s [2024-12-10T10:24:19.809Z] 6206.50 IOPS, 48.49 MiB/s [2024-12-10T10:24:20.771Z] 6207.67 IOPS, 48.50 MiB/s [2024-12-10T10:24:22.149Z] 6217.25 IOPS, 48.57 MiB/s [2024-12-10T10:24:23.108Z] 6218.20 IOPS, 48.58 MiB/s [2024-12-10T10:24:24.043Z] 6231.83 IOPS, 48.69 MiB/s [2024-12-10T10:24:24.980Z] 6235.29 IOPS, 48.71 MiB/s [2024-12-10T10:24:25.916Z] 6242.75 IOPS, 48.77 MiB/s [2024-12-10T10:24:26.852Z] 6244.44 IOPS, 48.78 MiB/s [2024-12-10T10:24:26.852Z] 6216.10 IOPS, 48.56 MiB/s 00:10:51.625 Latency(us) 00:10:51.625 [2024-12-10T10:24:26.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.625 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:51.625 Verification LBA range: start 0x0 length 0x1000 00:10:51.625 Nvme1n1 : 10.02 6217.28 48.57 0.00 0.00 20521.25 1779.90 34793.66 00:10:51.625 [2024-12-10T10:24:26.852Z] =================================================================================================================== 00:10:51.625 [2024-12-10T10:24:26.852Z] Total : 6217.28 48.57 0.00 0.00 20521.25 1779.90 34793.66 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=78291 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:51.884 { 00:10:51.884 "params": { 00:10:51.884 "name": "Nvme$subsystem", 00:10:51.884 "trtype": "$TEST_TRANSPORT", 00:10:51.884 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.884 "adrfam": "ipv4", 00:10:51.884 "trsvcid": "$NVMF_PORT", 00:10:51.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.884 "hdgst": ${hdgst:-false}, 00:10:51.884 "ddgst": ${ddgst:-false} 00:10:51.884 }, 00:10:51.884 "method": "bdev_nvme_attach_controller" 00:10:51.884 } 00:10:51.884 EOF 00:10:51.884 )") 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:51.884 [2024-12-10 10:24:26.940262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.884 [2024-12-10 10:24:26.940323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:51.884 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:51.885 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:51.885 "params": { 00:10:51.885 "name": "Nvme1", 00:10:51.885 "trtype": "tcp", 00:10:51.885 "traddr": "10.0.0.3", 00:10:51.885 "adrfam": "ipv4", 00:10:51.885 "trsvcid": "4420", 00:10:51.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.885 "hdgst": false, 00:10:51.885 "ddgst": false 00:10:51.885 }, 00:10:51.885 "method": "bdev_nvme_attach_controller" 00:10:51.885 }' 00:10:51.885 [2024-12-10 10:24:26.952260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:26.952292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:26.964253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:26.964296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:26.976254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:26.976296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:26.988295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:26.988337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.000263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.000304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.003671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:51.885 [2024-12-10 10:24:27.003783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78291 ] 00:10:51.885 [2024-12-10 10:24:27.012266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.012307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.024268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.024308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.036287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.036330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.048269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.048309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.060286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.060326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.072274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.072314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.084297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.084339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.096319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.096364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.885 [2024-12-10 10:24:27.108317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.885 [2024-12-10 10:24:27.108344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.120314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.120358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.132314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.132357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.144311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.144353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.149211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.144 [2024-12-10 10:24:27.156336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.156388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.168336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.168390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.180339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.180391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.185163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.144 [2024-12-10 10:24:27.192323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.192365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.204386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.204468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.216356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.216450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.223480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.144 [2024-12-10 10:24:27.228353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.228424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.240369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.240450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.144 [2024-12-10 10:24:27.252478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.144 [2024-12-10 10:24:27.252540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.264375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.264463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.276407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.276443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.288460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.288549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.300432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.300491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.312427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.312493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 Running I/O for 5 seconds... 
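Each error pair that repeats below corresponds to another nvmf_subsystem_add_ns RPC arriving for NSID 1 while that namespace is still attached: spdk_nvmf_subsystem_add_ns_ext rejects it ("Requested NSID 1 already in use") and the RPC layer reports "Unable to add namespace". A hypothetical sketch of such a loop, hammering the RPC while bdevperf I/O is in flight (the exact loop in target/zcopy.sh may differ):

    while kill -0 "$perfpid" 2>/dev/null; do
        # expected to fail while NSID 1 is in use; the failure is the point
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done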
00:10:52.145 [2024-12-10 10:24:27.324442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.324512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.342385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.342448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.145 [2024-12-10 10:24:27.357448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.145 [2024-12-10 10:24:27.357523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.373307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.373371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.390928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.390965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.406686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.406753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.416891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.416955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.433956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.434008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.448533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.448569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.464887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.464952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.481574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.481640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.499414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.499509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.515469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.515531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.525979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.526046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.541741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 
[2024-12-10 10:24:27.541780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.557515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.557576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.573638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.573688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.583147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.583213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.599784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.599820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.404 [2024-12-10 10:24:27.615000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.404 [2024-12-10 10:24:27.615070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.632269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.632307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.647629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.647665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.664309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.664376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.680880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.680931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.697975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.698043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.714190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.714242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.731223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.731287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.746320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.746370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.763174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.763213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.778413] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.778473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.795328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.795368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.812110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.812161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.829042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.829091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.847140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.847189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.862558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.862605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.663 [2024-12-10 10:24:27.880495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.663 [2024-12-10 10:24:27.880558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.894703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.894751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.911210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.911245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.927861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.927899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.945486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.945550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.961822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.961875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.977366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.977444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:27.994267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:27.994318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.009854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.009905] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.018979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.019029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.034811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.034859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.050779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.050814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.069043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.069092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.084428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.084494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.101469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.101568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-12-10 10:24:28.116353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-12-10 10:24:28.116431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 10:24:28.132651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 10:24:28.132694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.923 [2024-12-10 10:24:28.142646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.923 [2024-12-10 10:24:28.142691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.158895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.158929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.176634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.176681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.192478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.192535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.210177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.210226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.226049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.226115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.242566] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.242599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.260285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.260322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.276870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-12-10 10:24:28.276919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-12-10 10:24:28.293681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.293731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 10:24:28.310490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.310588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 10789.00 IOPS, 84.29 MiB/s [2024-12-10T10:24:28.409Z] [2024-12-10 10:24:28.328157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.328207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 10:24:28.342627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.342675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 10:24:28.359647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.359710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 10:24:28.375044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.375093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 10:24:28.384602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.384650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.182 [2024-12-10 10:24:28.400663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.182 [2024-12-10 10:24:28.400710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.410679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.410728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.426896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.426950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.442170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.442221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.458094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:53.441 [2024-12-10 10:24:28.458132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.475317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.475369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.491820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.491858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.508689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.508740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.525872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.525922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.441 [2024-12-10 10:24:28.542251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.441 [2024-12-10 10:24:28.542302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.558272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.558306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.576188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.576225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.591984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.592035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.602443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.602505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.618847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.618914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.634578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.634631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.644817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.644882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.442 [2024-12-10 10:24:28.661392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.442 [2024-12-10 10:24:28.661484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.676297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.676345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.692467] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.692539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.708761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.708812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.725292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.725332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.742017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.742084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.758503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.758553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.773335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.773383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.789308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.789357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.806268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.806319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.821709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.821758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.837460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.701 [2024-12-10 10:24:28.837522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.701 [2024-12-10 10:24:28.854205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.702 [2024-12-10 10:24:28.854238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.702 [2024-12-10 10:24:28.871279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.702 [2024-12-10 10:24:28.871317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.702 [2024-12-10 10:24:28.887370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.702 [2024-12-10 10:24:28.887481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.702 [2024-12-10 10:24:28.904316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.702 [2024-12-10 10:24:28.904382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.702 [2024-12-10 10:24:28.921149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.702 [2024-12-10 10:24:28.921188] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:28.937987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:28.938055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:28.957145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:28.957196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:28.972842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:28.972892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:28.989939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:28.989989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.004246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.004301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.019991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.020053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.029044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.029094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.044482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.044532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.059434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.059497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.068785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.068864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.084520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.084570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.101301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.101351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.117617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.117668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.134200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.134251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.153134] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.153186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.167225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.167275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.961 [2024-12-10 10:24:29.182909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.961 [2024-12-10 10:24:29.182974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.200578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.200628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.216134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.216184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.225076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.225126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.240851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.240901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.255821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.255872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.272207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.272273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.288788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.288838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.305773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.305810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.320754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.320820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 10956.00 IOPS, 85.59 MiB/s [2024-12-10T10:24:29.448Z] [2024-12-10 10:24:29.338256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.338290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.352730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.352781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.368564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:54.221 [2024-12-10 10:24:29.368613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.385140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.385190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.403493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.403543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.417574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.417623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.432900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.432949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.221 [2024-12-10 10:24:29.445535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.221 [2024-12-10 10:24:29.445596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.480 [2024-12-10 10:24:29.460991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.480 [2024-12-10 10:24:29.461042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.480 [2024-12-10 10:24:29.470967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.480 [2024-12-10 10:24:29.471017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.480 [2024-12-10 10:24:29.486198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.480 [2024-12-10 10:24:29.486247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.480 [2024-12-10 10:24:29.501760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.480 [2024-12-10 10:24:29.501810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.480 [2024-12-10 10:24:29.520034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.520082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.534885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.534934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.549841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.549888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.559774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.559813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.575770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.575808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.590369] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.590479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.607355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.607388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.624130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.624178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.641177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.641226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.656778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.656842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.667687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.667725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.682905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.682955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.481 [2024-12-10 10:24:29.699482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.481 [2024-12-10 10:24:29.699532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.716217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.716266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.733122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.733160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.747769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.747809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.764174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.764224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.781982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.782048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.797970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.798035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.815121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.815170] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.832362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.832436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.848085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.848135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.740 [2024-12-10 10:24:29.865925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.740 [2024-12-10 10:24:29.865990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.741 [2024-12-10 10:24:29.882097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.741 [2024-12-10 10:24:29.882138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.741 [2024-12-10 10:24:29.900494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.741 [2024-12-10 10:24:29.900576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.741 [2024-12-10 10:24:29.914983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.741 [2024-12-10 10:24:29.915031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.741 [2024-12-10 10:24:29.931437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.741 [2024-12-10 10:24:29.931485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.741 [2024-12-10 10:24:29.946824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.741 [2024-12-10 10:24:29.946874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.741 [2024-12-10 10:24:29.963552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.741 [2024-12-10 10:24:29.963625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:29.978525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:29.978576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:29.994374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:29.994452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.012367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.012435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.027006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.027057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.043808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.043847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.059573] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.059647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.078227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.078278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.092671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.092721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.108920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.108969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.125391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.125466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.142872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.142937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.159667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.159704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.174947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.174997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.186289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.186339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.202703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.202752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 [2024-12-10 10:24:30.218767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-12-10 10:24:30.218817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.228638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.228675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.243808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.243846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.260500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.260548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.278962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.279013] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.294364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.294446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.310524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.310574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 11232.67 IOPS, 87.76 MiB/s [2024-12-10T10:24:30.509Z] [2024-12-10 10:24:30.327629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.327667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.343704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.343741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.360785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.360836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.377357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.377432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.394247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.394295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.411084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.411134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.427670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.427708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.444140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.444191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.454065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.454102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.466755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.466793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.481757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.481807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.282 [2024-12-10 10:24:30.498590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.282 [2024-12-10 10:24:30.498641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 
10:24:30.515068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.515123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.532992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.533043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.549128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.549177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.565504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.565554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.581849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.581898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.590740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.590804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.606202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.606254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.621495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.621545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.637143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.637193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.656106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.656155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.671020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.671071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.687642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.687677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.704328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.704378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.721161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.721213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.737097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.737150] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.755449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.755503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.562 [2024-12-10 10:24:30.770762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.562 [2024-12-10 10:24:30.770808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.821 [2024-12-10 10:24:30.789988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.790072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.805549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.805600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.822576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.822626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.838679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.838729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.856144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.856195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.871724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.871775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.880771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.880836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.895897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.895961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.912367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.912445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.928354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.928434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.947532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.947590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.962872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.962922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.972354] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.972429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:30.988331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:30.988369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:31.005280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:31.005332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:31.021568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:31.021619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.822 [2024-12-10 10:24:31.039552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.822 [2024-12-10 10:24:31.039625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.055002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.055052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.064110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.064160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.080234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.080282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.090180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.090229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.104980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.105030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.114439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.114488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.130211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.130262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.145764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.145828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.164088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.081 [2024-12-10 10:24:31.164139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.081 [2024-12-10 10:24:31.178580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.178628] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.194877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.194927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.211620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.211671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.228258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.228307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.244651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.244686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.261406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.261506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.276885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.276934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.285849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.285898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.082 [2024-12-10 10:24:31.301976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.082 [2024-12-10 10:24:31.302026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.311810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.311861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 11411.75 IOPS, 89.15 MiB/s [2024-12-10T10:24:31.568Z] [2024-12-10 10:24:31.326386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.326464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.335541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.335616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.351796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.351846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.369911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.369959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.386107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.386157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 
10:24:31.403553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.403626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.417972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.418021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.434574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.434624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.450136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.450185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.467742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.467779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.484054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.484100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.500946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.501015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.518804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.518853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.341 [2024-12-10 10:24:31.533550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.341 [2024-12-10 10:24:31.533600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.342 [2024-12-10 10:24:31.542338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.342 [2024-12-10 10:24:31.542387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.342 [2024-12-10 10:24:31.558060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.342 [2024-12-10 10:24:31.558109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.573729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.573778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.591234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.591283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.607043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.607109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.623657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.623708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.640598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.640646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.657194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.657243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.673180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.673229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.689147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.689195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.698062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.698111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.713704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.713752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.728128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.728165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.743078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.743128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.758780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.758818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.768852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.768888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.784037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.784073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.795756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.795786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.811564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.811608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.601 [2024-12-10 10:24:31.826483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.601 [2024-12-10 10:24:31.826531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.842653] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.842703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.852212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.852261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.868700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.868734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.884700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.884750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.901727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.901777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.918551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.918602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.934365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.934442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.945929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.945978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.962598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.861 [2024-12-10 10:24:31.962647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.861 [2024-12-10 10:24:31.978613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:31.978660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.862 [2024-12-10 10:24:31.995275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:31.995324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.862 [2024-12-10 10:24:32.013629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:32.013680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.862 [2024-12-10 10:24:32.027877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:32.027925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.862 [2024-12-10 10:24:32.044557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:32.044594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.862 [2024-12-10 10:24:32.060545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:32.060583] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.862 [2024-12-10 10:24:32.078105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.862 [2024-12-10 10:24:32.078156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.093355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.093428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.110280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.110330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.125436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.125495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.142220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.142269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.158084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.158134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.176051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.176100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.190726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.190776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.206612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.206662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.223381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.223474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.240785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.240834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.256661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.256709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.274528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.274577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.292152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.292203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.306519] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.306583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 [2024-12-10 10:24:32.322430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.322492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 11537.20 IOPS, 90.13 MiB/s [2024-12-10T10:24:32.349Z] [2024-12-10 10:24:32.334558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.334608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.122 00:10:57.122 Latency(us) 00:10:57.122 [2024-12-10T10:24:32.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.122 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:57.122 Nvme1n1 : 5.01 11541.00 90.16 0.00 0.00 11078.27 4140.68 18945.86 00:10:57.122 [2024-12-10T10:24:32.349Z] =================================================================================================================== 00:10:57.122 [2024-12-10T10:24:32.349Z] Total : 11541.00 90.16 0.00 0.00 11078.27 4140.68 18945.86 00:10:57.122 [2024-12-10 10:24:32.346550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.122 [2024-12-10 10:24:32.346598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.358585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.358646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.370593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.370654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.382591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.382653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.394596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.394654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.406614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.406683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.418594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.418648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.430605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.430664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.442596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.442649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 
10:24:32.454612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.454667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.466587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.466630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 [2024-12-10 10:24:32.478588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.382 [2024-12-10 10:24:32.478631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.382 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (78291) - No such process 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 78291 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.382 delay0 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:57.382 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.383 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.383 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.383 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:57.642 [2024-12-10 10:24:32.681785] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:04.209 Initializing NVMe Controllers 00:11:04.209 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.209 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:04.209 Initialization complete. Launching workers. 
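For readers following the zcopy test flow above: steps 52-56 of zcopy.sh detach the original namespace, wrap the malloc0 bdev in a delay bdev so every command takes on the order of a second, re-attach it as NSID 1, and then drive it with the abort example so that aborts can catch commands that are still in flight. A minimal standalone sketch of that sequence, assuming the test's rpc_cmd wrapper maps to SPDK's scripts/rpc.py against an already running target (the subsystem NQN, bdev names, and flag values are taken from the log entries above, not re-verified here):

    # Detach the existing namespace (NSID 1) from the subsystem
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev; the four values set read/write average and
    # tail latency in microseconds (1 s here), making every I/O deliberately slow
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-attach the slowed-down bdev as NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Run 50/50 random read/write over TCP for 5 s at queue depth 64 on core 0,
    # submitting aborts for commands that are still outstanding
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

Because the delay bdev keeps most of the 64 queued commands outstanding, a large fraction of the submitted aborts can succeed, which is what the "abort submitted ... success ... unsuccessful" summary below is exercising.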
00:11:04.209 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 72 00:11:04.209 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 359, failed to submit 33 00:11:04.209 success 211, unsuccessful 148, failed 0 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.209 rmmod nvme_tcp 00:11:04.209 rmmod nvme_fabrics 00:11:04.209 rmmod nvme_keyring 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 78148 ']' 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 78148 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 78148 ']' 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 78148 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78148 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:04.209 killing process with pid 78148 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78148' 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 78148 00:11:04.209 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 78148 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:11:04.209 10:24:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:04.209 00:11:04.209 real 0m24.004s 00:11:04.209 user 0m39.340s 00:11:04.209 sys 0m6.596s 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.209 ************************************ 00:11:04.209 END TEST nvmf_zcopy 00:11:04.209 ************************************ 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.209 ************************************ 00:11:04.209 START TEST nvmf_nmic 00:11:04.209 ************************************ 00:11:04.209 10:24:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.209 * Looking for test storage... 00:11:04.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:04.209 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:04.468 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.469 --rc genhtml_branch_coverage=1 00:11:04.469 --rc genhtml_function_coverage=1 00:11:04.469 --rc genhtml_legend=1 00:11:04.469 --rc geninfo_all_blocks=1 00:11:04.469 --rc geninfo_unexecuted_blocks=1 00:11:04.469 00:11:04.469 ' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.469 --rc genhtml_branch_coverage=1 00:11:04.469 --rc genhtml_function_coverage=1 00:11:04.469 --rc genhtml_legend=1 00:11:04.469 --rc geninfo_all_blocks=1 00:11:04.469 --rc geninfo_unexecuted_blocks=1 00:11:04.469 00:11:04.469 ' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.469 --rc genhtml_branch_coverage=1 00:11:04.469 --rc genhtml_function_coverage=1 00:11:04.469 --rc genhtml_legend=1 00:11:04.469 --rc geninfo_all_blocks=1 00:11:04.469 --rc geninfo_unexecuted_blocks=1 00:11:04.469 00:11:04.469 ' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:04.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.469 --rc genhtml_branch_coverage=1 00:11:04.469 --rc genhtml_function_coverage=1 00:11:04.469 --rc genhtml_legend=1 00:11:04.469 --rc geninfo_all_blocks=1 00:11:04.469 --rc geninfo_unexecuted_blocks=1 00:11:04.469 00:11:04.469 ' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.469 10:24:39 
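The long xtrace run above is the lcov version gate from scripts/common.sh: each version string is split on '.', '-' and ':' into an array and the components are compared numerically, element by element, so that 1.15 sorts before 2. A condensed sketch of that comparison (helper name and structure simplified from the traced cmp_versions/lt functions, not copied verbatim):

# Sketch of the traced version comparison; version_lt is an illustrative name.
version_lt() {                       # returns 0 (true) when $1 < $2
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing component decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                          # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2: use the legacy --rc option names"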
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:04.469 10:24:39 
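The "[: : integer expression expected" message above is emitted while common.sh is sourced, not by the test itself: line 33 compares a variable that is empty in this environment against 1 with -eq, and bash refuses to treat an empty string as an integer. A minimal reproduction, using a placeholder variable name:

# Hypothetical reproduction of the warning above (variable name is illustrative).
some_flag=""
[ "$some_flag" -eq 1 ] || true        # stderr: [: : integer expression expected
[ "${some_flag:-0}" -eq 1 ] || true   # defaulting the expansion keeps the test well-formed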
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:04.469 Cannot 
find device "nvmf_init_br" 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:04.469 Cannot find device "nvmf_init_br2" 00:11:04.469 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:04.470 Cannot find device "nvmf_tgt_br" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.470 Cannot find device "nvmf_tgt_br2" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:04.470 Cannot find device "nvmf_init_br" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:04.470 Cannot find device "nvmf_init_br2" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:04.470 Cannot find device "nvmf_tgt_br" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:04.470 Cannot find device "nvmf_tgt_br2" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:04.470 Cannot find device "nvmf_br" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:04.470 Cannot find device "nvmf_init_if" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:04.470 Cannot find device "nvmf_init_if2" 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.470 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:04.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:11:04.728 00:11:04.728 --- 10.0.0.3 ping statistics --- 00:11:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.728 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:04.728 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:04.728 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:11:04.728 00:11:04.728 --- 10.0.0.4 ping statistics --- 00:11:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.728 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:04.728 00:11:04.728 --- 10.0.0.1 ping statistics --- 00:11:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.728 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:04.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:04.728 00:11:04.728 --- 10.0.0.2 ping statistics --- 00:11:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.728 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:04.728 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=78660 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 78660 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 78660 ']' 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.729 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:04.986 [2024-12-10 10:24:40.013105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
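By this point nvmf_veth_init has rebuilt the test topology from scratch. The earlier "Cannot find device" and "Cannot open network namespace" lines are just the idempotent cleanup probing for leftovers (each probe is allowed to fail, hence the paired "true" traces); the script then creates the nvmf_tgt_ns_spdk namespace, veth pairs for initiator and target, a bridge joining the host-side peers, SPDK-tagged iptables ACCEPT rules for port 4420, and checks reachability with single pings before launching nvmf_tgt inside the namespace (pid 78660 above). A simplified reconstruction of that pattern, reduced to one initiator/target pair instead of the two pairs the trace sets up:

# Simplified sketch of the traced topology; names and addresses are taken from the log.
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns "$NS"                          # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge                              # bridge the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                           # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target namespace -> host

Every listener the target opens on 10.0.0.3/10.0.0.4 is then only reachable through this bridge, which is why the target application is started as "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF".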
00:11:04.986 [2024-12-10 10:24:40.013203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.986 [2024-12-10 10:24:40.159476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.986 [2024-12-10 10:24:40.197745] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.986 [2024-12-10 10:24:40.197800] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.986 [2024-12-10 10:24:40.197812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.986 [2024-12-10 10:24:40.197820] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.986 [2024-12-10 10:24:40.197828] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.986 [2024-12-10 10:24:40.197918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.986 [2024-12-10 10:24:40.197979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.986 [2024-12-10 10:24:40.198637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.986 [2024-12-10 10:24:40.198640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.244 [2024-12-10 10:24:40.229813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.244 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 [2024-12-10 10:24:40.331312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 Malloc0 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.245 10:24:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 [2024-12-10 10:24:40.373988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 test case1: single bdev can't be used in multiple subsystems 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 [2024-12-10 10:24:40.397836] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:05.245 [2024-12-10 10:24:40.397881] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:05.245 [2024-12-10 10:24:40.397894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:05.245 request: 00:11:05.245 { 00:11:05.245 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:05.245 "namespace": { 00:11:05.245 "bdev_name": "Malloc0", 00:11:05.245 "no_auto_visible": false 00:11:05.245 }, 00:11:05.245 "method": "nvmf_subsystem_add_ns", 00:11:05.245 "req_id": 1 00:11:05.245 } 00:11:05.245 Got JSON-RPC error response 00:11:05.245 response: 00:11:05.245 { 00:11:05.245 "code": -32602, 00:11:05.245 "message": "Invalid parameters" 00:11:05.245 } 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:05.245 Adding namespace failed - expected result. 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:05.245 test case2: host connect to nvmf target in multiple paths 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 [2024-12-10 10:24:40.409972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:05.503 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:05.503 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.503 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:05.503 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.503 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:05.503 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.030 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.030 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.030 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.030 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.030 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.030 10:24:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:08.030 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:08.030 [global] 00:11:08.030 thread=1 00:11:08.030 invalidate=1 00:11:08.030 rw=write 00:11:08.030 time_based=1 00:11:08.030 runtime=1 00:11:08.030 ioengine=libaio 00:11:08.030 direct=1 00:11:08.030 bs=4096 00:11:08.030 iodepth=1 00:11:08.030 norandommap=0 00:11:08.030 numjobs=1 00:11:08.030 00:11:08.030 verify_dump=1 00:11:08.030 verify_backlog=512 00:11:08.030 verify_state_save=0 00:11:08.030 do_verify=1 00:11:08.030 verify=crc32c-intel 00:11:08.030 [job0] 00:11:08.030 filename=/dev/nvme0n1 00:11:08.030 Could not set queue depth (nvme0n1) 00:11:08.030 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.030 fio-3.35 00:11:08.030 Starting 1 thread 00:11:08.964 00:11:08.964 job0: (groupid=0, jobs=1): err= 0: pid=78744: Tue Dec 10 10:24:43 2024 00:11:08.964 read: IOPS=2855, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:11:08.964 slat (usec): min=12, max=108, avg=16.76, stdev= 4.53 00:11:08.964 clat (usec): min=143, max=732, avg=182.15, stdev=29.32 00:11:08.964 lat (usec): min=156, max=752, avg=198.91, stdev=31.00 00:11:08.964 clat percentiles (usec): 00:11:08.964 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:11:08.964 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:11:08.964 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:11:08.964 | 99.00th=[ 251], 99.50th=[ 277], 99.90th=[ 594], 99.95th=[ 603], 00:11:08.964 | 99.99th=[ 734] 00:11:08.964 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:08.964 slat (nsec): min=15130, max=99860, avg=23275.16, stdev=4620.45 00:11:08.964 clat (usec): min=88, max=296, avg=113.46, stdev=15.27 00:11:08.964 lat (usec): min=108, max=374, avg=136.74, stdev=17.49 00:11:08.964 clat percentiles (usec): 00:11:08.964 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:11:08.964 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 112], 60.00th=[ 115], 00:11:08.964 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 135], 95.00th=[ 141], 00:11:08.964 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 277], 00:11:08.964 | 99.99th=[ 297] 00:11:08.964 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:08.964 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:08.964 lat (usec) : 100=9.24%, 250=90.22%, 500=0.46%, 750=0.08% 00:11:08.964 cpu : usr=2.70%, sys=9.30%, ctx=5930, majf=0, minf=5 00:11:08.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.964 issued rwts: total=2858,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.964 00:11:08.964 Run status group 0 (all jobs): 00:11:08.964 READ: bw=11.2MiB/s (11.7MB/s), 11.2MiB/s-11.2MiB/s (11.7MB/s-11.7MB/s), io=11.2MiB (11.7MB), run=1001-1001msec 00:11:08.964 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:08.964 00:11:08.964 Disk stats (read/write): 00:11:08.964 nvme0n1: ios=2610/2723, merge=0/0, ticks=509/334, 
in_queue=843, util=91.58% 00:11:08.964 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:08.964 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.964 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.965 rmmod nvme_tcp 00:11:08.965 rmmod nvme_fabrics 00:11:08.965 rmmod nvme_keyring 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 78660 ']' 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 78660 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 78660 ']' 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 78660 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.965 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78660 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.223 killing process with pid 78660 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78660' 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 78660 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 78660 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:09.223 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:09.482 00:11:09.482 real 0m5.366s 00:11:09.482 user 0m15.537s 00:11:09.482 sys 0m2.258s 00:11:09.482 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.482 ************************************ 00:11:09.482 END TEST nvmf_nmic 00:11:09.482 ************************************ 00:11:09.482 10:24:44 
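The I/O phase that just completed is driven by fio-wrapper: a single libaio job writing 4 KiB blocks at queue depth 1 to /dev/nvme0n1 for one second with crc32c-intel verification, which is where the read/write IOPS summary and the "Disk stats" line come from. After fio returns, the host disconnects from cnode1 (dropping both the 4420 and 4421 paths), unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the nvmf_tgt process and deletes the bridge, veth pairs and namespace, restoring iptables from a dump filtered of the SPDK_NVMF-tagged rules. A standalone equivalent of the job, reconstructed from the parameters printed in the log (the job-file path is arbitrary):

# Re-run the traced workload outside the wrapper; parameters copied from the log's job dump.
cat > /tmp/nmic-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-verify.fio

# Teardown, mirroring the trace: drop the fabric connections and host modules.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
# then: kill the nvmf_tgt pid and delete nvmf_br, the veth pairs and nvmf_tgt_ns_spdk (as traced above)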
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.741 ************************************ 00:11:09.741 START TEST nvmf_fio_target 00:11:09.741 ************************************ 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:09.741 * Looking for test storage... 00:11:09.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.741 --rc genhtml_branch_coverage=1 00:11:09.741 --rc genhtml_function_coverage=1 00:11:09.741 --rc genhtml_legend=1 00:11:09.741 --rc geninfo_all_blocks=1 00:11:09.741 --rc geninfo_unexecuted_blocks=1 00:11:09.741 00:11:09.741 ' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.741 --rc genhtml_branch_coverage=1 00:11:09.741 --rc genhtml_function_coverage=1 00:11:09.741 --rc genhtml_legend=1 00:11:09.741 --rc geninfo_all_blocks=1 00:11:09.741 --rc geninfo_unexecuted_blocks=1 00:11:09.741 00:11:09.741 ' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.741 --rc genhtml_branch_coverage=1 00:11:09.741 --rc genhtml_function_coverage=1 00:11:09.741 --rc genhtml_legend=1 00:11:09.741 --rc geninfo_all_blocks=1 00:11:09.741 --rc geninfo_unexecuted_blocks=1 00:11:09.741 00:11:09.741 ' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.741 --rc genhtml_branch_coverage=1 00:11:09.741 --rc genhtml_function_coverage=1 00:11:09.741 --rc genhtml_legend=1 00:11:09.741 --rc geninfo_all_blocks=1 00:11:09.741 --rc geninfo_unexecuted_blocks=1 00:11:09.741 00:11:09.741 ' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:09.741 
10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.741 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.742 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.742 10:24:44 
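The "[: : integer expression expected" message above is harmless: line 33 of test/nvmf/common.sh runs an integer comparison against a configuration variable that is empty in this run, so the test expands to '[' '' -eq 1 ']', prints the warning, and simply takes the false branch; the script continues normally. A defensive form looks like the sketch below, where SOME_TEST_FLAG is a placeholder name, not the variable common.sh actually reads:

# '[' '' -eq 1 ']' -> "integer expression expected"; defaulting the variable avoids the noise:
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"   # SOME_TEST_FLAG is hypothetical, for illustration only
fi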
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:09.742 Cannot find device "nvmf_init_br" 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:09.742 Cannot find device "nvmf_init_br2" 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:09.742 Cannot find device "nvmf_tgt_br" 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:09.742 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.001 Cannot find device "nvmf_tgt_br2" 00:11:10.001 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:10.001 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:10.001 Cannot find device "nvmf_init_br" 00:11:10.001 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:10.001 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:10.001 Cannot find device "nvmf_init_br2" 00:11:10.001 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:10.001 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:10.001 Cannot find device "nvmf_tgt_br" 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:10.001 Cannot find device "nvmf_tgt_br2" 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:10.001 Cannot find device "nvmf_br" 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:10.001 Cannot find device "nvmf_init_if" 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:10.001 Cannot find device "nvmf_init_if2" 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:10.001 
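The "Cannot find device" and "Cannot open network namespace" messages in this stretch of the log are expected: before building the test network, nvmf_veth_init tears down whatever a previous run may have left behind, and on a clean host every one of those commands fails harmlessly, with the xtrace showing each failure swallowed by a trailing true. The same idiom in isolation looks roughly like this (a condensed sketch, not the script's literal text):

# Best-effort cleanup of a previous run; every step may fail on a clean host.
ip link set nvmf_init_br nomaster            || true
ip link set nvmf_tgt_br down                 || true
ip link delete nvmf_br type bridge           || true
ip link delete nvmf_init_if                  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true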
10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:10.001 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:10.002 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:10.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:10.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:11:10.259 00:11:10.259 --- 10.0.0.3 ping statistics --- 00:11:10.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.259 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:10.259 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:10.259 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:10.259 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:11:10.260 00:11:10.260 --- 10.0.0.4 ping statistics --- 00:11:10.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.260 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:10.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:10.260 00:11:10.260 --- 10.0.0.1 ping statistics --- 00:11:10.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.260 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:10.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:10.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:10.260 00:11:10.260 --- 10.0.0.2 ping statistics --- 00:11:10.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.260 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78977 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78977 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78977 ']' 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.260 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.260 [2024-12-10 10:24:45.392835] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
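Everything from "ip netns add" through the four pings above is nvmf_veth_init building a self-contained test network: the NVMe-oF target will run inside the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, and the bridge-side ends of the veth pairs are enslaved to nvmf_br so the two sides can reach each other. Condensed to one interface per side, the topology is roughly the following sketch (a summary of the logged commands, not a replacement for common.sh):

ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the *_br ends will be enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two sides together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Let NVMe/TCP (port 4420) in and let traffic hairpin across the bridge;
# the comment tag makes the rules easy to find and delete at teardown.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Both directions should answer before the target is started.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1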
00:11:10.260 [2024-12-10 10:24:45.393194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.518 [2024-12-10 10:24:45.536234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.518 [2024-12-10 10:24:45.577675] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.518 [2024-12-10 10:24:45.577966] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.518 [2024-12-10 10:24:45.577992] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.518 [2024-12-10 10:24:45.578003] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.518 [2024-12-10 10:24:45.578012] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.518 [2024-12-10 10:24:45.578109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.518 [2024-12-10 10:24:45.578309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.518 [2024-12-10 10:24:45.578573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.518 [2024-12-10 10:24:45.578576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.518 [2024-12-10 10:24:45.612198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.518 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:10.777 [2024-12-10 10:24:46.003393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.054 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.323 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:11.323 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.581 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:11.581 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.840 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:11.840 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.100 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:12.100 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:12.359 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.618 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:12.618 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.186 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:13.186 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.186 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:13.186 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:13.445 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.704 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:13.704 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.272 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.272 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.272 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:14.530 [2024-12-10 10:24:49.691996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:14.530 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:14.789 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:15.047 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:15.306 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:15.306 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.306 10:24:50 
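With the network in place, target/fio.sh drives the rest over JSON-RPC: it launches nvmf_tgt inside the namespace, creates the TCP transport, builds four namespaces out of malloc bdevs (two plain, one RAID-0, one concat), exposes them under a single subsystem listening on 10.0.0.3:4420, and then connects from the host with the kernel initiator. Stripped of the xtrace noise, the sequence is roughly the sketch below; the paths, NQNs, and hostid are the values logged above, while the backgrounding and wait loop stand in for the script's waitforlisten/waitforserial helpers:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target runs inside the test namespace so it owns 10.0.0.3.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the script records the pid and waits for /var/tmp/spdk.sock before issuing RPCs)

$rpc nvmf_create_transport -t tcp -o -u 8192

# Two plain malloc namespaces plus a RAID-0 and a concat volume (64 MiB, 512 B blocks each).
$rpc bdev_malloc_create 64 512    # Malloc0
$rpc bdev_malloc_create 64 512    # Malloc1
$rpc bdev_malloc_create 64 512    # Malloc2
$rpc bdev_malloc_create 64 512    # Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512    # Malloc4
$rpc bdev_malloc_create 64 512    # Malloc5
$rpc bdev_malloc_create 64 512    # Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# Host-side connect; four namespaces should appear as nvme0n1..nvme0n4.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
             --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

# Wait until lsblk shows all four namespaces with the test serial.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 4 ]; do sleep 2; done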
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.306 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:15.306 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:15.306 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:17.208 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:17.467 [global] 00:11:17.467 thread=1 00:11:17.467 invalidate=1 00:11:17.467 rw=write 00:11:17.467 time_based=1 00:11:17.467 runtime=1 00:11:17.467 ioengine=libaio 00:11:17.467 direct=1 00:11:17.467 bs=4096 00:11:17.467 iodepth=1 00:11:17.467 norandommap=0 00:11:17.467 numjobs=1 00:11:17.467 00:11:17.467 verify_dump=1 00:11:17.467 verify_backlog=512 00:11:17.467 verify_state_save=0 00:11:17.467 do_verify=1 00:11:17.467 verify=crc32c-intel 00:11:17.467 [job0] 00:11:17.467 filename=/dev/nvme0n1 00:11:17.467 [job1] 00:11:17.467 filename=/dev/nvme0n2 00:11:17.467 [job2] 00:11:17.467 filename=/dev/nvme0n3 00:11:17.467 [job3] 00:11:17.467 filename=/dev/nvme0n4 00:11:17.467 Could not set queue depth (nvme0n1) 00:11:17.467 Could not set queue depth (nvme0n2) 00:11:17.467 Could not set queue depth (nvme0n3) 00:11:17.467 Could not set queue depth (nvme0n4) 00:11:17.467 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.467 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.467 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.467 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.467 fio-3.35 00:11:17.467 Starting 4 threads 00:11:18.843 00:11:18.843 job0: (groupid=0, jobs=1): err= 0: pid=79159: Tue Dec 10 10:24:53 2024 00:11:18.843 read: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:11:18.843 slat (nsec): min=11644, max=45374, avg=15416.68, stdev=4799.60 00:11:18.843 clat (usec): min=135, max=1525, avg=165.88, stdev=27.55 00:11:18.843 lat (usec): min=148, max=1538, avg=181.30, stdev=28.10 00:11:18.843 clat percentiles (usec): 00:11:18.843 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:11:18.843 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:11:18.843 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 184], 00:11:18.844 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 225], 99.95th=[ 330], 00:11:18.844 | 99.99th=[ 1532] 
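The fio figures here are easy to spot-check against each other: job0's read side reports IOPS=2933 at the 4 KiB block size fixed in the job file, and as a rough back-of-the-envelope check (not additional measurement data):

2933 IOPS x 4096 B = 12,013,568 B/s ~ 12.0 MB/s ~ 11.46 MiB/s

which matches the BW=11.5MiB/s (12.0MB/s) figure reported on the same line.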
00:11:18.844 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.844 slat (usec): min=14, max=124, avg=22.12, stdev= 6.81 00:11:18.844 clat (usec): min=92, max=231, avg=126.62, stdev= 9.65 00:11:18.844 lat (usec): min=111, max=355, avg=148.74, stdev=12.37 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:11:18.844 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:11:18.844 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:11:18.844 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 180], 00:11:18.844 | 99.99th=[ 233] 00:11:18.844 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.844 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.844 lat (usec) : 100=0.12%, 250=99.85%, 500=0.02% 00:11:18.844 lat (msec) : 2=0.02% 00:11:18.844 cpu : usr=2.20%, sys=9.10%, ctx=6008, majf=0, minf=7 00:11:18.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 issued rwts: total=2936,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.844 job1: (groupid=0, jobs=1): err= 0: pid=79160: Tue Dec 10 10:24:53 2024 00:11:18.844 read: IOPS=2741, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:11:18.844 slat (nsec): min=10969, max=52963, avg=14702.00, stdev=4182.42 00:11:18.844 clat (usec): min=140, max=1698, avg=175.09, stdev=36.07 00:11:18.844 lat (usec): min=152, max=1712, avg=189.79, stdev=36.26 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:11:18.844 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:11:18.844 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 206], 00:11:18.844 | 99.00th=[ 241], 99.50th=[ 262], 99.90th=[ 537], 99.95th=[ 570], 00:11:18.844 | 99.99th=[ 1696] 00:11:18.844 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.844 slat (usec): min=13, max=106, avg=21.68, stdev= 6.61 00:11:18.844 clat (usec): min=102, max=212, avg=131.10, stdev=12.17 00:11:18.844 lat (usec): min=120, max=319, avg=152.78, stdev=13.79 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 122], 00:11:18.844 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:11:18.844 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:11:18.844 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 196], 00:11:18.844 | 99.99th=[ 212] 00:11:18.844 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.844 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.844 lat (usec) : 250=99.64%, 500=0.31%, 750=0.03% 00:11:18.844 lat (msec) : 2=0.02% 00:11:18.844 cpu : usr=2.70%, sys=8.10%, ctx=5816, majf=0, minf=7 00:11:18.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 issued rwts: total=2744,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.844 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:11:18.844 job2: (groupid=0, jobs=1): err= 0: pid=79161: Tue Dec 10 10:24:53 2024 00:11:18.844 read: IOPS=2598, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec) 00:11:18.844 slat (nsec): min=11720, max=44188, avg=14870.18, stdev=3514.01 00:11:18.844 clat (usec): min=148, max=798, avg=178.65, stdev=24.41 00:11:18.844 lat (usec): min=162, max=828, avg=193.52, stdev=25.11 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:11:18.844 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:11:18.844 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:11:18.844 | 99.00th=[ 217], 99.50th=[ 351], 99.90th=[ 441], 99.95th=[ 498], 00:11:18.844 | 99.99th=[ 799] 00:11:18.844 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.844 slat (usec): min=13, max=106, avg=21.91, stdev= 6.82 00:11:18.844 clat (usec): min=103, max=320, avg=136.54, stdev=11.19 00:11:18.844 lat (usec): min=127, max=351, avg=158.45, stdev=14.05 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:11:18.844 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:11:18.844 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:11:18.844 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 225], 00:11:18.844 | 99.99th=[ 322] 00:11:18.844 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.844 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.844 lat (usec) : 250=99.65%, 500=0.33%, 1000=0.02% 00:11:18.844 cpu : usr=1.70%, sys=8.90%, ctx=5676, majf=0, minf=15 00:11:18.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 issued rwts: total=2601,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.844 job3: (groupid=0, jobs=1): err= 0: pid=79162: Tue Dec 10 10:24:53 2024 00:11:18.844 read: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec) 00:11:18.844 slat (nsec): min=11627, max=77091, avg=14669.67, stdev=3036.03 00:11:18.844 clat (usec): min=137, max=5834, avg=183.14, stdev=150.96 00:11:18.844 lat (usec): min=162, max=5850, avg=197.81, stdev=151.12 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:11:18.844 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:11:18.844 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:11:18.844 | 99.00th=[ 212], 99.50th=[ 233], 99.90th=[ 2900], 99.95th=[ 4080], 00:11:18.844 | 99.99th=[ 5866] 00:11:18.844 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:18.844 slat (usec): min=14, max=120, avg=20.35, stdev= 3.86 00:11:18.844 clat (usec): min=108, max=221, avg=136.27, stdev= 9.78 00:11:18.844 lat (usec): min=128, max=342, avg=156.62, stdev=10.62 00:11:18.844 clat percentiles (usec): 00:11:18.844 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 129], 00:11:18.844 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:11:18.844 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 153], 00:11:18.844 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 180], 
00:11:18.844 | 99.99th=[ 223] 00:11:18.844 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:18.844 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:18.844 lat (usec) : 250=99.84%, 500=0.04%, 750=0.04%, 1000=0.02% 00:11:18.844 lat (msec) : 2=0.02%, 4=0.02%, 10=0.04% 00:11:18.844 cpu : usr=2.40%, sys=7.60%, ctx=5649, majf=0, minf=8 00:11:18.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.844 issued rwts: total=2576,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.844 00:11:18.844 Run status group 0 (all jobs): 00:11:18.844 READ: bw=42.4MiB/s (44.4MB/s), 10.1MiB/s-11.5MiB/s (10.5MB/s-12.0MB/s), io=42.4MiB (44.5MB), run=1001-1001msec 00:11:18.844 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:11:18.844 00:11:18.844 Disk stats (read/write): 00:11:18.844 nvme0n1: ios=2610/2571, merge=0/0, ticks=477/342, in_queue=819, util=88.48% 00:11:18.844 nvme0n2: ios=2436/2560, merge=0/0, ticks=436/361, in_queue=797, util=87.82% 00:11:18.844 nvme0n3: ios=2283/2560, merge=0/0, ticks=415/380, in_queue=795, util=89.21% 00:11:18.844 nvme0n4: ios=2259/2560, merge=0/0, ticks=417/373, in_queue=790, util=89.46% 00:11:18.844 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:18.844 [global] 00:11:18.844 thread=1 00:11:18.844 invalidate=1 00:11:18.844 rw=randwrite 00:11:18.844 time_based=1 00:11:18.844 runtime=1 00:11:18.844 ioengine=libaio 00:11:18.844 direct=1 00:11:18.844 bs=4096 00:11:18.844 iodepth=1 00:11:18.844 norandommap=0 00:11:18.844 numjobs=1 00:11:18.844 00:11:18.844 verify_dump=1 00:11:18.844 verify_backlog=512 00:11:18.844 verify_state_save=0 00:11:18.844 do_verify=1 00:11:18.844 verify=crc32c-intel 00:11:18.844 [job0] 00:11:18.844 filename=/dev/nvme0n1 00:11:18.844 [job1] 00:11:18.844 filename=/dev/nvme0n2 00:11:18.844 [job2] 00:11:18.844 filename=/dev/nvme0n3 00:11:18.844 [job3] 00:11:18.844 filename=/dev/nvme0n4 00:11:18.844 Could not set queue depth (nvme0n1) 00:11:18.844 Could not set queue depth (nvme0n2) 00:11:18.844 Could not set queue depth (nvme0n3) 00:11:18.844 Could not set queue depth (nvme0n4) 00:11:18.844 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.844 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.844 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.844 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.844 fio-3.35 00:11:18.844 Starting 4 threads 00:11:20.220 00:11:20.220 job0: (groupid=0, jobs=1): err= 0: pid=79215: Tue Dec 10 10:24:55 2024 00:11:20.220 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:20.220 slat (nsec): min=11206, max=39705, avg=13626.80, stdev=2342.66 00:11:20.220 clat (usec): min=136, max=277, avg=161.13, stdev=12.25 00:11:20.220 lat (usec): min=148, max=292, avg=174.76, stdev=13.04 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 141], 
5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:11:20.220 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:11:20.220 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:11:20.220 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 208], 99.95th=[ 212], 00:11:20.220 | 99.99th=[ 277] 00:11:20.220 write: IOPS=3177, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:11:20.220 slat (usec): min=13, max=1799, avg=20.47, stdev=32.84 00:11:20.220 clat (usec): min=3, max=2102, avg=122.00, stdev=46.13 00:11:20.220 lat (usec): min=108, max=2123, avg=142.48, stdev=55.54 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 110], 00:11:20.220 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 00:11:20.220 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 141], 00:11:20.220 | 99.00th=[ 169], 99.50th=[ 223], 99.90th=[ 562], 99.95th=[ 1237], 00:11:20.220 | 99.99th=[ 2114] 00:11:20.220 bw ( KiB/s): min=12288, max=12288, per=31.26%, avg=12288.00, stdev= 0.00, samples=1 00:11:20.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:20.220 lat (usec) : 4=0.03%, 100=2.16%, 250=97.62%, 500=0.13%, 750=0.03% 00:11:20.220 lat (msec) : 2=0.02%, 4=0.02% 00:11:20.220 cpu : usr=2.10%, sys=8.50%, ctx=6256, majf=0, minf=7 00:11:20.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.220 issued rwts: total=3072,3181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.220 job1: (groupid=0, jobs=1): err= 0: pid=79216: Tue Dec 10 10:24:55 2024 00:11:20.220 read: IOPS=2098, BW=8396KiB/s (8597kB/s)(8404KiB/1001msec) 00:11:20.220 slat (usec): min=11, max=109, avg=14.37, stdev= 5.49 00:11:20.220 clat (usec): min=93, max=2237, avg=247.91, stdev=59.09 00:11:20.220 lat (usec): min=169, max=2264, avg=262.28, stdev=59.38 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 231], 00:11:20.220 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:11:20.220 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:11:20.220 | 99.00th=[ 322], 99.50th=[ 474], 99.90th=[ 553], 99.95th=[ 898], 00:11:20.220 | 99.99th=[ 2245] 00:11:20.220 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:20.220 slat (nsec): min=14331, max=86549, avg=19179.34, stdev=3603.01 00:11:20.220 clat (usec): min=90, max=470, avg=153.28, stdev=43.77 00:11:20.220 lat (usec): min=109, max=489, avg=172.46, stdev=44.54 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 113], 00:11:20.220 | 30.00th=[ 119], 40.00th=[ 125], 50.00th=[ 135], 60.00th=[ 180], 00:11:20.220 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 215], 00:11:20.220 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 424], 99.95th=[ 433], 00:11:20.220 | 99.99th=[ 469] 00:11:20.220 bw ( KiB/s): min=10096, max=10096, per=25.68%, avg=10096.00, stdev= 0.00, samples=1 00:11:20.220 iops : min= 2524, max= 2524, avg=2524.00, stdev= 0.00, samples=1 00:11:20.220 lat (usec) : 100=1.74%, 250=72.56%, 500=25.55%, 750=0.11%, 1000=0.02% 00:11:20.220 lat (msec) : 4=0.02% 00:11:20.220 cpu : usr=1.30%, sys=6.60%, ctx=4676, majf=0, minf=15 00:11:20.220 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.220 issued rwts: total=2101,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.220 job2: (groupid=0, jobs=1): err= 0: pid=79217: Tue Dec 10 10:24:55 2024 00:11:20.220 read: IOPS=2402, BW=9610KiB/s (9841kB/s)(9620KiB/1001msec) 00:11:20.220 slat (nsec): min=10847, max=39542, avg=12577.64, stdev=1772.75 00:11:20.220 clat (usec): min=164, max=3907, avg=214.29, stdev=84.93 00:11:20.220 lat (usec): min=176, max=3919, avg=226.86, stdev=85.16 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:11:20.220 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:11:20.220 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 245], 00:11:20.220 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 750], 99.95th=[ 1729], 00:11:20.220 | 99.99th=[ 3916] 00:11:20.220 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:20.220 slat (nsec): min=13512, max=93646, avg=18434.60, stdev=3498.35 00:11:20.220 clat (usec): min=117, max=244, avg=155.92, stdev=16.26 00:11:20.220 lat (usec): min=135, max=337, avg=174.36, stdev=17.06 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:11:20.220 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:11:20.220 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:11:20.220 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 223], 99.95th=[ 235], 00:11:20.220 | 99.99th=[ 245] 00:11:20.220 bw ( KiB/s): min=11776, max=11776, per=29.96%, avg=11776.00, stdev= 0.00, samples=1 00:11:20.220 iops : min= 2944, max= 2944, avg=2944.00, stdev= 0.00, samples=1 00:11:20.220 lat (usec) : 250=98.55%, 500=1.35%, 750=0.06% 00:11:20.220 lat (msec) : 2=0.02%, 4=0.02% 00:11:20.220 cpu : usr=2.00%, sys=6.00%, ctx=4965, majf=0, minf=13 00:11:20.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.220 issued rwts: total=2405,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.220 job3: (groupid=0, jobs=1): err= 0: pid=79218: Tue Dec 10 10:24:55 2024 00:11:20.220 read: IOPS=1280, BW=5123KiB/s (5246kB/s)(5128KiB/1001msec) 00:11:20.220 slat (nsec): min=22461, max=54542, avg=28782.55, stdev=3593.55 00:11:20.220 clat (usec): min=156, max=692, avg=390.39, stdev=80.40 00:11:20.220 lat (usec): min=184, max=722, avg=419.17, stdev=80.20 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 188], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 297], 00:11:20.220 | 30.00th=[ 330], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 424], 00:11:20.220 | 70.00th=[ 449], 80.00th=[ 465], 90.00th=[ 482], 95.00th=[ 490], 00:11:20.220 | 99.00th=[ 537], 99.50th=[ 627], 99.90th=[ 668], 99.95th=[ 693], 00:11:20.220 | 99.99th=[ 693] 00:11:20.220 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:20.220 slat (usec): min=31, max=181, avg=39.69, stdev= 5.56 00:11:20.220 clat (usec): min=115, max=3147, avg=255.06, stdev=91.12 
00:11:20.220 lat (usec): min=160, max=3185, avg=294.76, stdev=91.14 00:11:20.220 clat percentiles (usec): 00:11:20.220 | 1.00th=[ 174], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 219], 00:11:20.220 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:11:20.220 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 334], 00:11:20.220 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 1221], 99.95th=[ 3163], 00:11:20.220 | 99.99th=[ 3163] 00:11:20.220 bw ( KiB/s): min= 7640, max= 7640, per=19.44%, avg=7640.00, stdev= 0.00, samples=1 00:11:20.220 iops : min= 1910, max= 1910, avg=1910.00, stdev= 0.00, samples=1 00:11:20.221 lat (usec) : 250=35.81%, 500=62.60%, 750=1.49%, 1000=0.04% 00:11:20.221 lat (msec) : 2=0.04%, 4=0.04% 00:11:20.221 cpu : usr=1.90%, sys=8.30%, ctx=2819, majf=0, minf=13 00:11:20.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.221 issued rwts: total=1282,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.221 00:11:20.221 Run status group 0 (all jobs): 00:11:20.221 READ: bw=34.6MiB/s (36.3MB/s), 5123KiB/s-12.0MiB/s (5246kB/s-12.6MB/s), io=34.6MiB (36.3MB), run=1001-1001msec 00:11:20.221 WRITE: bw=38.4MiB/s (40.3MB/s), 6138KiB/s-12.4MiB/s (6285kB/s-13.0MB/s), io=38.4MiB (40.3MB), run=1001-1001msec 00:11:20.221 00:11:20.221 Disk stats (read/write): 00:11:20.221 nvme0n1: ios=2610/2890, merge=0/0, ticks=457/367, in_queue=824, util=89.18% 00:11:20.221 nvme0n2: ios=1986/2048, merge=0/0, ticks=518/332, in_queue=850, util=89.71% 00:11:20.221 nvme0n3: ios=2069/2270, merge=0/0, ticks=456/368, in_queue=824, util=89.75% 00:11:20.221 nvme0n4: ios=1024/1494, merge=0/0, ticks=386/407, in_queue=793, util=89.91% 00:11:20.221 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:20.221 [global] 00:11:20.221 thread=1 00:11:20.221 invalidate=1 00:11:20.221 rw=write 00:11:20.221 time_based=1 00:11:20.221 runtime=1 00:11:20.221 ioengine=libaio 00:11:20.221 direct=1 00:11:20.221 bs=4096 00:11:20.221 iodepth=128 00:11:20.221 norandommap=0 00:11:20.221 numjobs=1 00:11:20.221 00:11:20.221 verify_dump=1 00:11:20.221 verify_backlog=512 00:11:20.221 verify_state_save=0 00:11:20.221 do_verify=1 00:11:20.221 verify=crc32c-intel 00:11:20.221 [job0] 00:11:20.221 filename=/dev/nvme0n1 00:11:20.221 [job1] 00:11:20.221 filename=/dev/nvme0n2 00:11:20.221 [job2] 00:11:20.221 filename=/dev/nvme0n3 00:11:20.221 [job3] 00:11:20.221 filename=/dev/nvme0n4 00:11:20.221 Could not set queue depth (nvme0n1) 00:11:20.221 Could not set queue depth (nvme0n2) 00:11:20.221 Could not set queue depth (nvme0n3) 00:11:20.221 Could not set queue depth (nvme0n4) 00:11:20.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.221 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.221 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.221 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.221 fio-3.35 00:11:20.221 Starting 4 threads 00:11:21.595 00:11:21.595 job0: (groupid=0, 
jobs=1): err= 0: pid=79279: Tue Dec 10 10:24:56 2024 00:11:21.595 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:11:21.595 slat (usec): min=6, max=7181, avg=189.49, stdev=780.33 00:11:21.595 clat (usec): min=17204, max=31210, avg=23434.02, stdev=2312.09 00:11:21.595 lat (usec): min=17685, max=31373, avg=23623.51, stdev=2393.22 00:11:21.595 clat percentiles (usec): 00:11:21.595 | 1.00th=[17957], 5.00th=[19268], 10.00th=[20055], 20.00th=[22676], 00:11:21.595 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:11:21.595 | 70.00th=[23725], 80.00th=[24249], 90.00th=[26870], 95.00th=[27919], 00:11:21.595 | 99.00th=[29754], 99.50th=[30278], 99.90th=[31065], 99.95th=[31065], 00:11:21.595 | 99.99th=[31327] 00:11:21.595 write: IOPS=2796, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1005msec); 0 zone resets 00:11:21.595 slat (usec): min=12, max=6746, avg=174.41, stdev=565.90 00:11:21.595 clat (usec): min=4535, max=31040, avg=23700.82, stdev=2978.40 00:11:21.595 lat (usec): min=5949, max=31061, avg=23875.23, stdev=2990.02 00:11:21.595 clat percentiles (usec): 00:11:21.595 | 1.00th=[ 8979], 5.00th=[19530], 10.00th=[21890], 20.00th=[22676], 00:11:21.595 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:11:21.595 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27395], 95.00th=[28967], 00:11:21.595 | 99.00th=[30016], 99.50th=[30540], 99.90th=[31065], 99.95th=[31065], 00:11:21.595 | 99.99th=[31065] 00:11:21.595 bw ( KiB/s): min= 9304, max=12176, per=16.27%, avg=10740.00, stdev=2030.81, samples=2 00:11:21.595 iops : min= 2326, max= 3044, avg=2685.00, stdev=507.70, samples=2 00:11:21.595 lat (msec) : 10=0.61%, 20=7.36%, 50=92.03% 00:11:21.595 cpu : usr=2.29%, sys=9.66%, ctx=447, majf=0, minf=9 00:11:21.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:21.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.596 issued rwts: total=2560,2810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.596 job1: (groupid=0, jobs=1): err= 0: pid=79281: Tue Dec 10 10:24:56 2024 00:11:21.596 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:11:21.596 slat (usec): min=10, max=3989, avg=85.16, stdev=398.45 00:11:21.596 clat (usec): min=8796, max=13956, avg=11363.53, stdev=548.54 00:11:21.596 lat (usec): min=10747, max=13970, avg=11448.69, stdev=383.23 00:11:21.596 clat percentiles (usec): 00:11:21.596 | 1.00th=[ 8979], 5.00th=[10945], 10.00th=[11076], 20.00th=[11207], 00:11:21.596 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:11:21.596 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:11:21.596 | 99.00th=[13829], 99.50th=[13960], 99.90th=[13960], 99.95th=[13960], 00:11:21.596 | 99.99th=[13960] 00:11:21.596 write: IOPS=5845, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1002msec); 0 zone resets 00:11:21.596 slat (usec): min=13, max=2551, avg=82.47, stdev=336.72 00:11:21.596 clat (usec): min=313, max=11469, avg=10696.39, stdev=849.95 00:11:21.596 lat (usec): min=2283, max=12013, avg=10778.86, stdev=779.93 00:11:21.596 clat percentiles (usec): 00:11:21.596 | 1.00th=[ 5669], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 00:11:21.596 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:11:21.596 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11076], 95.00th=[11207], 00:11:21.596 | 99.00th=[11338], 
99.50th=[11338], 99.90th=[11469], 99.95th=[11469], 00:11:21.596 | 99.99th=[11469] 00:11:21.596 bw ( KiB/s): min=21256, max=24576, per=34.73%, avg=22916.00, stdev=2347.59, samples=2 00:11:21.596 iops : min= 5314, max= 6144, avg=5729.00, stdev=586.90, samples=2 00:11:21.596 lat (usec) : 500=0.01% 00:11:21.596 lat (msec) : 4=0.28%, 10=3.63%, 20=96.08% 00:11:21.596 cpu : usr=3.70%, sys=15.68%, ctx=361, majf=0, minf=19 00:11:21.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:21.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.596 issued rwts: total=5632,5857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.596 job2: (groupid=0, jobs=1): err= 0: pid=79282: Tue Dec 10 10:24:56 2024 00:11:21.596 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:11:21.596 slat (usec): min=6, max=7135, avg=190.61, stdev=781.08 00:11:21.596 clat (usec): min=17347, max=31279, avg=23435.05, stdev=2348.20 00:11:21.596 lat (usec): min=17618, max=31422, avg=23625.66, stdev=2429.72 00:11:21.596 clat percentiles (usec): 00:11:21.596 | 1.00th=[17957], 5.00th=[19268], 10.00th=[19792], 20.00th=[22676], 00:11:21.596 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:11:21.596 | 70.00th=[23725], 80.00th=[24249], 90.00th=[27132], 95.00th=[27919], 00:11:21.596 | 99.00th=[29754], 99.50th=[30278], 99.90th=[31065], 99.95th=[31065], 00:11:21.596 | 99.99th=[31327] 00:11:21.596 write: IOPS=2793, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1006msec); 0 zone resets 00:11:21.596 slat (usec): min=16, max=6731, avg=173.31, stdev=558.33 00:11:21.596 clat (usec): min=5352, max=31083, avg=23740.94, stdev=2837.96 00:11:21.596 lat (usec): min=6737, max=31105, avg=23914.25, stdev=2849.05 00:11:21.596 clat percentiles (usec): 00:11:21.596 | 1.00th=[11338], 5.00th=[19792], 10.00th=[21890], 20.00th=[22676], 00:11:21.596 | 30.00th=[22938], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:11:21.596 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27395], 95.00th=[28967], 00:11:21.596 | 99.00th=[30278], 99.50th=[30540], 99.90th=[31065], 99.95th=[31065], 00:11:21.596 | 99.99th=[31065] 00:11:21.596 bw ( KiB/s): min= 9304, max=12176, per=16.27%, avg=10740.00, stdev=2030.81, samples=2 00:11:21.596 iops : min= 2326, max= 3044, avg=2685.00, stdev=507.70, samples=2 00:11:21.596 lat (msec) : 10=0.47%, 20=7.45%, 50=92.09% 00:11:21.596 cpu : usr=2.99%, sys=9.15%, ctx=414, majf=0, minf=13 00:11:21.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:21.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.596 issued rwts: total=2560,2810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.596 job3: (groupid=0, jobs=1): err= 0: pid=79283: Tue Dec 10 10:24:56 2024 00:11:21.596 read: IOPS=4873, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1002msec) 00:11:21.596 slat (usec): min=4, max=4136, avg=98.05, stdev=388.99 00:11:21.596 clat (usec): min=458, max=16940, avg=12937.48, stdev=1350.09 00:11:21.596 lat (usec): min=1407, max=16964, avg=13035.53, stdev=1380.59 00:11:21.596 clat percentiles (usec): 00:11:21.596 | 1.00th=[ 7832], 5.00th=[11076], 10.00th=[12125], 20.00th=[12649], 00:11:21.596 | 30.00th=[12780], 40.00th=[12911], 
50.00th=[12911], 60.00th=[13042], 00:11:21.596 | 70.00th=[13173], 80.00th=[13304], 90.00th=[14353], 95.00th=[14877], 00:11:21.596 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16712], 99.95th=[16712], 00:11:21.596 | 99.99th=[16909] 00:11:21.596 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:21.596 slat (usec): min=10, max=3338, avg=93.77, stdev=423.13 00:11:21.596 clat (usec): min=9256, max=16668, avg=12378.16, stdev=885.74 00:11:21.596 lat (usec): min=9290, max=16684, avg=12471.93, stdev=970.04 00:11:21.596 clat percentiles (usec): 00:11:21.596 | 1.00th=[10421], 5.00th=[11469], 10.00th=[11731], 20.00th=[11863], 00:11:21.596 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:11:21.596 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[14615], 00:11:21.596 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16450], 99.95th=[16581], 00:11:21.596 | 99.99th=[16712] 00:11:21.596 bw ( KiB/s): min=20480, max=20521, per=31.06%, avg=20500.50, stdev=28.99, samples=2 00:11:21.596 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:11:21.596 lat (usec) : 500=0.01% 00:11:21.596 lat (msec) : 2=0.10%, 4=0.07%, 10=1.06%, 20=98.76% 00:11:21.596 cpu : usr=5.19%, sys=14.49%, ctx=405, majf=0, minf=11 00:11:21.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:21.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.596 issued rwts: total=4883,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.596 00:11:21.596 Run status group 0 (all jobs): 00:11:21.596 READ: bw=60.7MiB/s (63.7MB/s), 9.94MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=61.1MiB (64.0MB), run=1002-1006msec 00:11:21.596 WRITE: bw=64.4MiB/s (67.6MB/s), 10.9MiB/s-22.8MiB/s (11.4MB/s-23.9MB/s), io=64.8MiB (68.0MB), run=1002-1006msec 00:11:21.596 00:11:21.596 Disk stats (read/write): 00:11:21.596 nvme0n1: ios=2098/2559, merge=0/0, ticks=16080/18852, in_queue=34932, util=88.47% 00:11:21.596 nvme0n2: ios=4881/5120, merge=0/0, ticks=12616/11741, in_queue=24357, util=88.89% 00:11:21.596 nvme0n3: ios=2080/2559, merge=0/0, ticks=16102/18925, in_queue=35027, util=89.83% 00:11:21.596 nvme0n4: ios=4096/4564, merge=0/0, ticks=16875/15834, in_queue=32709, util=89.77% 00:11:21.596 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:21.596 [global] 00:11:21.596 thread=1 00:11:21.596 invalidate=1 00:11:21.596 rw=randwrite 00:11:21.596 time_based=1 00:11:21.596 runtime=1 00:11:21.596 ioengine=libaio 00:11:21.596 direct=1 00:11:21.596 bs=4096 00:11:21.596 iodepth=128 00:11:21.596 norandommap=0 00:11:21.596 numjobs=1 00:11:21.596 00:11:21.596 verify_dump=1 00:11:21.596 verify_backlog=512 00:11:21.596 verify_state_save=0 00:11:21.596 do_verify=1 00:11:21.596 verify=crc32c-intel 00:11:21.596 [job0] 00:11:21.596 filename=/dev/nvme0n1 00:11:21.596 [job1] 00:11:21.596 filename=/dev/nvme0n2 00:11:21.596 [job2] 00:11:21.596 filename=/dev/nvme0n3 00:11:21.596 [job3] 00:11:21.596 filename=/dev/nvme0n4 00:11:21.596 Could not set queue depth (nvme0n1) 00:11:21.596 Could not set queue depth (nvme0n2) 00:11:21.596 Could not set queue depth (nvme0n3) 00:11:21.596 Could not set queue depth (nvme0n4) 00:11:21.596 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.596 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.596 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.596 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.596 fio-3.35 00:11:21.596 Starting 4 threads 00:11:22.973 00:11:22.973 job0: (groupid=0, jobs=1): err= 0: pid=79341: Tue Dec 10 10:24:57 2024 00:11:22.973 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:11:22.973 slat (usec): min=8, max=5192, avg=87.65, stdev=398.40 00:11:22.973 clat (usec): min=7874, max=16075, avg=11371.54, stdev=845.09 00:11:22.973 lat (usec): min=8650, max=19634, avg=11459.19, stdev=866.43 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10290], 20.00th=[10945], 00:11:22.973 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:11:22.973 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12911], 00:11:22.973 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15270], 99.95th=[15795], 00:11:22.973 | 99.99th=[16057] 00:11:22.973 write: IOPS=6052, BW=23.6MiB/s (24.8MB/s)(23.7MiB/1004msec); 0 zone resets 00:11:22.973 slat (usec): min=12, max=4249, avg=77.02, stdev=434.38 00:11:22.973 clat (usec): min=3200, max=15020, avg=10359.96, stdev=1112.82 00:11:22.973 lat (usec): min=3220, max=15052, avg=10436.98, stdev=1184.81 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10028], 00:11:22.973 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:11:22.973 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10945], 95.00th=[12256], 00:11:22.973 | 99.00th=[13829], 99.50th=[14222], 99.90th=[14615], 99.95th=[14746], 00:11:22.973 | 99.99th=[15008] 00:11:22.973 bw ( KiB/s): min=23024, max=24526, per=36.44%, avg=23775.00, stdev=1062.07, samples=2 00:11:22.973 iops : min= 5756, max= 6131, avg=5943.50, stdev=265.17, samples=2 00:11:22.973 lat (msec) : 4=0.29%, 10=12.60%, 20=87.11% 00:11:22.973 cpu : usr=4.39%, sys=14.26%, ctx=383, majf=0, minf=11 00:11:22.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:22.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.973 issued rwts: total=5632,6077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.973 job1: (groupid=0, jobs=1): err= 0: pid=79342: Tue Dec 10 10:24:57 2024 00:11:22.973 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:11:22.973 slat (usec): min=7, max=10728, avg=195.77, stdev=804.99 00:11:22.973 clat (usec): min=14068, max=34998, avg=24264.37, stdev=2333.90 00:11:22.973 lat (usec): min=14096, max=35018, avg=24460.14, stdev=2362.36 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[17171], 5.00th=[20579], 10.00th=[21627], 20.00th=[23462], 00:11:22.973 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:11:22.973 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26608], 95.00th=[27919], 00:11:22.973 | 99.00th=[31327], 99.50th=[31327], 99.90th=[33162], 99.95th=[34341], 00:11:22.973 | 99.99th=[34866] 00:11:22.973 write: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1006msec); 0 zone resets 00:11:22.973 slat (usec): min=5, 
max=9510, avg=168.50, stdev=830.39 00:11:22.973 clat (usec): min=2466, max=33222, avg=22932.08, stdev=3397.46 00:11:22.973 lat (usec): min=7459, max=33246, avg=23100.58, stdev=3370.66 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[ 8225], 5.00th=[16581], 10.00th=[19530], 20.00th=[22414], 00:11:22.973 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:11:22.973 | 70.00th=[23725], 80.00th=[24249], 90.00th=[26346], 95.00th=[28967], 00:11:22.973 | 99.00th=[31065], 99.50th=[31327], 99.90th=[33162], 99.95th=[33162], 00:11:22.973 | 99.99th=[33162] 00:11:22.973 bw ( KiB/s): min= 9488, max=12288, per=16.69%, avg=10888.00, stdev=1979.90, samples=2 00:11:22.973 iops : min= 2372, max= 3072, avg=2722.00, stdev=494.97, samples=2 00:11:22.973 lat (msec) : 4=0.02%, 10=0.61%, 20=7.12%, 50=92.26% 00:11:22.973 cpu : usr=2.29%, sys=7.86%, ctx=491, majf=0, minf=9 00:11:22.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:22.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.973 issued rwts: total=2560,2850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.973 job2: (groupid=0, jobs=1): err= 0: pid=79343: Tue Dec 10 10:24:57 2024 00:11:22.973 read: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1006msec) 00:11:22.973 slat (usec): min=9, max=6979, avg=105.15, stdev=671.81 00:11:22.973 clat (usec): min=1358, max=23219, avg=14517.25, stdev=1808.86 00:11:22.973 lat (usec): min=6370, max=27640, avg=14622.40, stdev=1828.45 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[13960], 20.00th=[14222], 00:11:22.973 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:11:22.973 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[15664], 00:11:22.973 | 99.00th=[22152], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:11:22.973 | 99.99th=[23200] 00:11:22.973 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:11:22.973 slat (usec): min=9, max=9760, avg=106.11, stdev=644.88 00:11:22.973 clat (usec): min=7232, max=18272, avg=13321.75, stdev=1177.27 00:11:22.973 lat (usec): min=9723, max=18496, avg=13427.86, stdev=1026.27 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[ 8586], 5.00th=[11863], 10.00th=[12256], 20.00th=[12780], 00:11:22.973 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:11:22.973 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14222], 95.00th=[14353], 00:11:22.973 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:11:22.973 | 99.99th=[18220] 00:11:22.973 bw ( KiB/s): min=17416, max=19448, per=28.25%, avg=18432.00, stdev=1436.84, samples=2 00:11:22.973 iops : min= 4354, max= 4862, avg=4608.00, stdev=359.21, samples=2 00:11:22.973 lat (msec) : 2=0.01%, 10=3.48%, 20=95.78%, 50=0.73% 00:11:22.973 cpu : usr=3.98%, sys=12.54%, ctx=195, majf=0, minf=11 00:11:22.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:22.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.973 issued rwts: total=4555,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.973 job3: (groupid=0, jobs=1): err= 0: pid=79344: Tue 
Dec 10 10:24:57 2024 00:11:22.973 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:11:22.973 slat (usec): min=7, max=9012, avg=182.66, stdev=732.84 00:11:22.973 clat (usec): min=14161, max=35006, avg=24320.28, stdev=1813.05 00:11:22.973 lat (usec): min=15539, max=35027, avg=24502.94, stdev=1844.11 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[19006], 5.00th=[21103], 10.00th=[22152], 20.00th=[23725], 00:11:22.973 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:11:22.973 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26084], 95.00th=[26870], 00:11:22.973 | 99.00th=[30540], 99.50th=[31589], 99.90th=[34866], 99.95th=[34866], 00:11:22.973 | 99.99th=[34866] 00:11:22.973 write: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1003msec); 0 zone resets 00:11:22.973 slat (usec): min=5, max=9779, avg=178.05, stdev=878.09 00:11:22.973 clat (usec): min=1996, max=32624, avg=22044.87, stdev=3204.82 00:11:22.973 lat (usec): min=2057, max=33050, avg=22222.92, stdev=3194.15 00:11:22.973 clat percentiles (usec): 00:11:22.973 | 1.00th=[ 6915], 5.00th=[17957], 10.00th=[19006], 20.00th=[21365], 00:11:22.973 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:11:22.973 | 70.00th=[23462], 80.00th=[23462], 90.00th=[23725], 95.00th=[23987], 00:11:22.973 | 99.00th=[26608], 99.50th=[28443], 99.90th=[30802], 99.95th=[32113], 00:11:22.973 | 99.99th=[32637] 00:11:22.973 bw ( KiB/s): min= 9680, max=12263, per=16.82%, avg=10971.50, stdev=1826.46, samples=2 00:11:22.973 iops : min= 2422, max= 3065, avg=2743.50, stdev=454.67, samples=2 00:11:22.973 lat (msec) : 2=0.02%, 4=0.48%, 10=0.59%, 20=6.66%, 50=92.25% 00:11:22.973 cpu : usr=2.50%, sys=7.98%, ctx=444, majf=0, minf=15 00:11:22.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:22.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.973 issued rwts: total=2560,2872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.973 00:11:22.973 Run status group 0 (all jobs): 00:11:22.973 READ: bw=59.4MiB/s (62.3MB/s), 9.94MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=59.8MiB (62.7MB), run=1003-1006msec 00:11:22.973 WRITE: bw=63.7MiB/s (66.8MB/s), 11.1MiB/s-23.6MiB/s (11.6MB/s-24.8MB/s), io=64.1MiB (67.2MB), run=1003-1006msec 00:11:22.973 00:11:22.973 Disk stats (read/write): 00:11:22.973 nvme0n1: ios=4929/5120, merge=0/0, ticks=26848/21677, in_queue=48525, util=88.47% 00:11:22.973 nvme0n2: ios=2093/2560, merge=0/0, ticks=24861/26373, in_queue=51234, util=88.82% 00:11:22.973 nvme0n3: ios=3652/4096, merge=0/0, ticks=50632/50648, in_queue=101280, util=89.14% 00:11:22.973 nvme0n4: ios=2048/2553, merge=0/0, ticks=24163/26631, in_queue=50794, util=87.83% 00:11:22.973 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:22.973 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=79357 00:11:22.973 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.973 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:22.973 [global] 00:11:22.973 thread=1 00:11:22.973 invalidate=1 00:11:22.973 rw=read 00:11:22.973 time_based=1 00:11:22.973 runtime=10 00:11:22.973 ioengine=libaio 00:11:22.973 direct=1 00:11:22.973 
bs=4096 00:11:22.973 iodepth=1 00:11:22.973 norandommap=1 00:11:22.973 numjobs=1 00:11:22.973 00:11:22.973 [job0] 00:11:22.973 filename=/dev/nvme0n1 00:11:22.973 [job1] 00:11:22.973 filename=/dev/nvme0n2 00:11:22.973 [job2] 00:11:22.973 filename=/dev/nvme0n3 00:11:22.973 [job3] 00:11:22.973 filename=/dev/nvme0n4 00:11:22.973 Could not set queue depth (nvme0n1) 00:11:22.973 Could not set queue depth (nvme0n2) 00:11:22.973 Could not set queue depth (nvme0n3) 00:11:22.974 Could not set queue depth (nvme0n4) 00:11:22.974 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.974 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.974 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.974 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.974 fio-3.35 00:11:22.974 Starting 4 threads 00:11:26.259 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:26.259 fio: pid=79400, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.259 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41287680, buflen=4096 00:11:26.259 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:26.518 fio: pid=79399, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.518 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=71725056, buflen=4096 00:11:26.518 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.518 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:26.777 fio: pid=79397, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.777 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55263232, buflen=4096 00:11:26.777 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.777 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:27.036 fio: pid=79398, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:27.036 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=20758528, buflen=4096 00:11:27.036 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.036 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:27.036 00:11:27.036 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79397: Tue Dec 10 10:25:02 2024 00:11:27.036 read: IOPS=3828, BW=15.0MiB/s (15.7MB/s)(52.7MiB/3524msec) 00:11:27.036 slat (usec): min=11, max=10748, avg=17.60, stdev=155.03 00:11:27.036 clat (usec): min=125, max=2721, avg=242.07, stdev=65.00 00:11:27.036 lat (usec): 
min=137, max=10922, avg=259.67, stdev=166.96 00:11:27.036 clat percentiles (usec): 00:11:27.036 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 167], 00:11:27.036 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 265], 00:11:27.036 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:11:27.036 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 652], 99.95th=[ 1205], 00:11:27.036 | 99.99th=[ 1942] 00:11:27.036 bw ( KiB/s): min=13752, max=14358, per=21.60%, avg=14182.33, stdev=226.38, samples=6 00:11:27.036 iops : min= 3438, max= 3589, avg=3545.50, stdev=56.52, samples=6 00:11:27.036 lat (usec) : 250=35.91%, 500=63.95%, 750=0.05%, 1000=0.02% 00:11:27.036 lat (msec) : 2=0.04%, 4=0.01% 00:11:27.036 cpu : usr=1.05%, sys=4.97%, ctx=13498, majf=0, minf=1 00:11:27.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.036 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.036 issued rwts: total=13493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.036 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79398: Tue Dec 10 10:25:02 2024 00:11:27.036 read: IOPS=5630, BW=22.0MiB/s (23.1MB/s)(83.8MiB/3810msec) 00:11:27.036 slat (usec): min=10, max=16719, avg=16.63, stdev=209.02 00:11:27.036 clat (usec): min=120, max=4365, avg=159.78, stdev=69.28 00:11:27.036 lat (usec): min=131, max=16989, avg=176.41, stdev=221.09 00:11:27.036 clat percentiles (usec): 00:11:27.036 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:11:27.036 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:11:27.036 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:11:27.036 | 99.00th=[ 198], 99.50th=[ 289], 99.90th=[ 824], 99.95th=[ 1582], 00:11:27.036 | 99.99th=[ 3490] 00:11:27.036 bw ( KiB/s): min=20508, max=23496, per=34.43%, avg=22603.86, stdev=1075.95, samples=7 00:11:27.036 iops : min= 5127, max= 5874, avg=5650.86, stdev=269.03, samples=7 00:11:27.036 lat (usec) : 250=99.43%, 500=0.36%, 750=0.09%, 1000=0.03% 00:11:27.036 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01% 00:11:27.036 cpu : usr=1.97%, sys=6.33%, ctx=21472, majf=0, minf=1 00:11:27.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.036 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.036 issued rwts: total=21453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.036 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79399: Tue Dec 10 10:25:02 2024 00:11:27.036 read: IOPS=5352, BW=20.9MiB/s (21.9MB/s)(68.4MiB/3272msec) 00:11:27.036 slat (usec): min=10, max=11170, avg=14.25, stdev=107.45 00:11:27.036 clat (usec): min=140, max=1209, avg=171.31, stdev=17.80 00:11:27.036 lat (usec): min=152, max=11364, avg=185.56, stdev=109.25 00:11:27.036 clat percentiles (usec): 00:11:27.036 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:11:27.037 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:11:27.037 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:11:27.037 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 273], 
99.95th=[ 429], 00:11:27.037 | 99.99th=[ 1012] 00:11:27.037 bw ( KiB/s): min=21272, max=21696, per=32.85%, avg=21570.83, stdev=169.88, samples=6 00:11:27.037 iops : min= 5318, max= 5424, avg=5392.67, stdev=42.44, samples=6 00:11:27.037 lat (usec) : 250=99.85%, 500=0.11%, 750=0.02% 00:11:27.037 lat (msec) : 2=0.01% 00:11:27.037 cpu : usr=1.62%, sys=5.75%, ctx=17514, majf=0, minf=2 00:11:27.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.037 issued rwts: total=17512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.037 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79400: Tue Dec 10 10:25:02 2024 00:11:27.037 read: IOPS=3397, BW=13.3MiB/s (13.9MB/s)(39.4MiB/2967msec) 00:11:27.037 slat (usec): min=11, max=452, avg=14.11, stdev= 6.19 00:11:27.037 clat (usec): min=151, max=6126, avg=278.76, stdev=174.35 00:11:27.037 lat (usec): min=164, max=6143, avg=292.87, stdev=174.82 00:11:27.037 clat percentiles (usec): 00:11:27.037 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:11:27.037 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:11:27.037 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:11:27.037 | 99.00th=[ 347], 99.50th=[ 404], 99.90th=[ 3720], 99.95th=[ 4621], 00:11:27.037 | 99.99th=[ 6128] 00:11:27.037 bw ( KiB/s): min=12880, max=14288, per=20.80%, avg=13657.60, stdev=550.07, samples=5 00:11:27.037 iops : min= 3220, max= 3572, avg=3414.40, stdev=137.52, samples=5 00:11:27.037 lat (usec) : 250=10.97%, 500=88.68%, 750=0.07%, 1000=0.04% 00:11:27.037 lat (msec) : 2=0.04%, 4=0.11%, 10=0.08% 00:11:27.037 cpu : usr=0.84%, sys=4.18%, ctx=10081, majf=0, minf=1 00:11:27.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.037 issued rwts: total=10081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.037 00:11:27.037 Run status group 0 (all jobs): 00:11:27.037 READ: bw=64.1MiB/s (67.2MB/s), 13.3MiB/s-22.0MiB/s (13.9MB/s-23.1MB/s), io=244MiB (256MB), run=2967-3810msec 00:11:27.037 00:11:27.037 Disk stats (read/write): 00:11:27.037 nvme0n1: ios=12603/0, merge=0/0, ticks=3179/0, in_queue=3179, util=95.48% 00:11:27.037 nvme0n2: ios=20369/0, merge=0/0, ticks=3302/0, in_queue=3302, util=95.00% 00:11:27.037 nvme0n3: ios=16699/0, merge=0/0, ticks=2889/0, in_queue=2889, util=96.31% 00:11:27.037 nvme0n4: ios=9745/0, merge=0/0, ticks=2723/0, in_queue=2723, util=96.33% 00:11:27.295 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.295 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:27.554 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.554 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:27.813 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.813 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.072 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.072 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 79357 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:28.331 nvmf hotplug test: fio failed as expected 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:28.331 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.899 rmmod nvme_tcp 00:11:28.899 rmmod nvme_fabrics 00:11:28.899 rmmod nvme_keyring 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78977 ']' 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78977 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78977 ']' 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78977 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78977 00:11:28.899 killing process with pid 78977 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78977' 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78977 00:11:28.899 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78977 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:28.899 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:29.158 10:25:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.158 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:29.158 ************************************ 00:11:29.158 END TEST nvmf_fio_target 00:11:29.158 ************************************ 00:11:29.158 00:11:29.158 real 0m19.601s 00:11:29.158 user 1m13.275s 00:11:29.158 sys 0m10.375s 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.159 ************************************ 00:11:29.159 START TEST nvmf_bdevio 00:11:29.159 ************************************ 00:11:29.159 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.419 * Looking for test storage... 
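The nvmf_fio_target teardown traced above reduces, in condensed form, to the sequence below. Every command appears in the log lines themselves; only the grouping into one listing is editorial, so treat it as an illustrative sketch rather than the literal contents of fio.sh or nvmf/common.sh.

# Condensed teardown sketch (commands as traced in the log above).
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # the log shows nvme_tcp, nvme_fabrics and nvme_keyring being removed here
modprobe -v -r nvme-fabrics
# target process (pid 78977) is then killed and waited on, before the network is dismantled:
ip link set nvmf_init_br nomaster && ip link set nvmf_tgt_br nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if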
00:11:29.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.419 --rc genhtml_branch_coverage=1 00:11:29.419 --rc genhtml_function_coverage=1 00:11:29.419 --rc genhtml_legend=1 00:11:29.419 --rc geninfo_all_blocks=1 00:11:29.419 --rc geninfo_unexecuted_blocks=1 00:11:29.419 00:11:29.419 ' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.419 --rc genhtml_branch_coverage=1 00:11:29.419 --rc genhtml_function_coverage=1 00:11:29.419 --rc genhtml_legend=1 00:11:29.419 --rc geninfo_all_blocks=1 00:11:29.419 --rc geninfo_unexecuted_blocks=1 00:11:29.419 00:11:29.419 ' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.419 --rc genhtml_branch_coverage=1 00:11:29.419 --rc genhtml_function_coverage=1 00:11:29.419 --rc genhtml_legend=1 00:11:29.419 --rc geninfo_all_blocks=1 00:11:29.419 --rc geninfo_unexecuted_blocks=1 00:11:29.419 00:11:29.419 ' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.419 --rc genhtml_branch_coverage=1 00:11:29.419 --rc genhtml_function_coverage=1 00:11:29.419 --rc genhtml_legend=1 00:11:29.419 --rc geninfo_all_blocks=1 00:11:29.419 --rc geninfo_unexecuted_blocks=1 00:11:29.419 00:11:29.419 ' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.419 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
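MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set just above are the size (in MiB) and block size that bdevio.sh later hands to the target when it creates the malloc bdev under test; that RPC call falls outside this excerpt. A typical invocation against a running target would look roughly like the line below, where the bdev name is a placeholder rather than something taken from the log.

# Sketch only: create a 64 MiB, 512-byte-block malloc bdev over the default RPC socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc0 64 512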
00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:29.420 Cannot find device "nvmf_init_br" 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:29.420 Cannot find device "nvmf_init_br2" 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:29.420 Cannot find device "nvmf_tgt_br" 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.420 Cannot find device "nvmf_tgt_br2" 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:29.420 Cannot find device "nvmf_init_br" 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:29.420 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:29.679 Cannot find device "nvmf_init_br2" 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:29.679 Cannot find device "nvmf_tgt_br" 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:29.679 Cannot find device "nvmf_tgt_br2" 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:29.679 Cannot find device "nvmf_br" 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:29.679 Cannot find device "nvmf_init_if" 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:29.679 Cannot find device "nvmf_init_if2" 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.679 
10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:29.679 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:29.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:29.938 00:11:29.938 --- 10.0.0.3 ping statistics --- 00:11:29.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.938 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:29.938 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:29.938 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:29.938 00:11:29.938 --- 10.0.0.4 ping statistics --- 00:11:29.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.938 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:29.938 00:11:29.938 --- 10.0.0.1 ping statistics --- 00:11:29.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.938 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:29.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:29.938 00:11:29.938 --- 10.0.0.2 ping statistics --- 00:11:29.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.938 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:29.938 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:29.938 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=79720 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 79720 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 79720 ']' 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.939 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 [2024-12-10 10:25:05.070198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
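The nvmf_veth_init trace above builds the fixture this suite runs on: a network namespace for the target, two veth pairs on each side, a bridge joining them, 10.0.0.1-4/24 addresses, iptables ACCEPT rules for port 4420, and four pings to confirm reachability before nvmf_tgt is launched inside the namespace. Condensed to a single initiator/target pair, the same topology can be reproduced by hand with roughly the following (a sketch using the names and addresses from the trace; the real helper also brings up the second pair and the *_if2 addresses):

# sketch of the topology set up by nvmf_veth_init above (run as root)
ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties both sides together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                         # host-side initiator reaches the target address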
00:11:29.939 [2024-12-10 10:25:05.070495] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.197 [2024-12-10 10:25:05.210304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.197 [2024-12-10 10:25:05.253841] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.197 [2024-12-10 10:25:05.254343] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.197 [2024-12-10 10:25:05.255047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.197 [2024-12-10 10:25:05.255636] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.197 [2024-12-10 10:25:05.255982] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.197 [2024-12-10 10:25:05.256485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:30.197 [2024-12-10 10:25:05.256609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:30.197 [2024-12-10 10:25:05.256660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:30.197 [2024-12-10 10:25:05.256662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.197 [2024-12-10 10:25:05.290143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.197 [2024-12-10 10:25:05.390689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.197 Malloc0 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.197 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.456 [2024-12-10 10:25:05.439983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:30.456 { 00:11:30.456 "params": { 00:11:30.456 "name": "Nvme$subsystem", 00:11:30.456 "trtype": "$TEST_TRANSPORT", 00:11:30.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:30.456 "adrfam": "ipv4", 00:11:30.456 "trsvcid": "$NVMF_PORT", 00:11:30.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:30.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:30.456 "hdgst": ${hdgst:-false}, 00:11:30.456 "ddgst": ${ddgst:-false} 00:11:30.456 }, 00:11:30.456 "method": "bdev_nvme_attach_controller" 00:11:30.456 } 00:11:30.456 EOF 00:11:30.456 )") 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
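With the target up inside the namespace, bdevio.sh drives the usual export sequence over /var/tmp/spdk.sock: create the TCP transport, back it with a malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and listen on 10.0.0.3:4420. rpc_cmd in the trace is, in effect, a wrapper around scripts/rpc.py, so the same bring-up by hand looks roughly like this (arguments copied from the trace above; socket path left at its default):

# same export sequence via rpc.py (sketch)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420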
00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:30.456 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:30.456 "params": { 00:11:30.456 "name": "Nvme1", 00:11:30.456 "trtype": "tcp", 00:11:30.456 "traddr": "10.0.0.3", 00:11:30.456 "adrfam": "ipv4", 00:11:30.456 "trsvcid": "4420", 00:11:30.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:30.456 "hdgst": false, 00:11:30.456 "ddgst": false 00:11:30.456 }, 00:11:30.456 "method": "bdev_nvme_attach_controller" 00:11:30.456 }' 00:11:30.456 [2024-12-10 10:25:05.496026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:30.456 [2024-12-10 10:25:05.496140] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79748 ] 00:11:30.456 [2024-12-10 10:25:05.639988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.715 [2024-12-10 10:25:05.685834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.715 [2024-12-10 10:25:05.685977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.715 [2024-12-10 10:25:05.685985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.715 [2024-12-10 10:25:05.728839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.715 I/O targets: 00:11:30.715 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:30.715 00:11:30.715 00:11:30.715 CUnit - A unit testing framework for C - Version 2.1-3 00:11:30.715 http://cunit.sourceforge.net/ 00:11:30.715 00:11:30.715 00:11:30.715 Suite: bdevio tests on: Nvme1n1 00:11:30.715 Test: blockdev write read block ...passed 00:11:30.715 Test: blockdev write zeroes read block ...passed 00:11:30.715 Test: blockdev write zeroes read no split ...passed 00:11:30.715 Test: blockdev write zeroes read split ...passed 00:11:30.715 Test: blockdev write zeroes read split partial ...passed 00:11:30.715 Test: blockdev reset ...[2024-12-10 10:25:05.857408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:30.715 [2024-12-10 10:25:05.857852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe8b40 (9): Bad file descriptor 00:11:30.715 [2024-12-10 10:25:05.876211] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:30.715 passed 00:11:30.715 Test: blockdev write read 8 blocks ...passed 00:11:30.715 Test: blockdev write read size > 128k ...passed 00:11:30.715 Test: blockdev write read invalid size ...passed 00:11:30.715 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:30.715 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:30.715 Test: blockdev write read max offset ...passed 00:11:30.715 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:30.715 Test: blockdev writev readv 8 blocks ...passed 00:11:30.715 Test: blockdev writev readv 30 x 1block ...passed 00:11:30.715 Test: blockdev writev readv block ...passed 00:11:30.715 Test: blockdev writev readv size > 128k ...passed 00:11:30.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:30.715 Test: blockdev comparev and writev ...[2024-12-10 10:25:05.884431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.884605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.884634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.884646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.884956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.884979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.884997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.885008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.885288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.885309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.885326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.885336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.885808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.885959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.886119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:30.715 [2024-12-10 10:25:05.886262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:30.715 passed 00:11:30.715 Test: blockdev nvme passthru rw ...passed 00:11:30.715 Test: blockdev nvme passthru vendor specific ...[2024-12-10 10:25:05.887482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.715 [2024-12-10 10:25:05.887510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.887632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.715 [2024-12-10 10:25:05.887657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.887764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.715 [2024-12-10 10:25:05.887784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:30.715 [2024-12-10 10:25:05.887881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:30.715 [2024-12-10 10:25:05.887910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:30.715 passed 00:11:30.715 Test: blockdev nvme admin passthru ...passed 00:11:30.715 Test: blockdev copy ...passed 00:11:30.715 00:11:30.715 Run Summary: Type Total Ran Passed Failed Inactive 00:11:30.715 suites 1 1 n/a 0 0 00:11:30.715 tests 23 23 23 0 0 00:11:30.715 asserts 152 152 152 0 n/a 00:11:30.715 00:11:30.715 Elapsed time = 0.163 seconds 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.983 rmmod nvme_tcp 00:11:30.983 rmmod nvme_fabrics 00:11:30.983 rmmod nvme_keyring 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
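From here the run tears the fixture back down: the target process (nvmfpid 79720) is killed and nvmf_veth_fini reverses the topology setup, while iptr restores iptables without the SPDK_NVMF-tagged rules. In outline (a sketch based on the commands in the surrounding trace; the final ip netns delete inside remove_spdk_ns is an assumption, and the *_if2/*_br2 counterparts are handled the same way):

# reverse of nvmf_veth_init (sketch)
iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the rules tagged SPDK_NVMF
ip link set nvmf_init_br nomaster                            # detach bridge ports
ip link set nvmf_init_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                                  # deleting one end removes the veth pair
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                             # assumed final step of remove_spdk_ns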
00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 79720 ']' 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 79720 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 79720 ']' 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 79720 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79720 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:30.983 killing process with pid 79720 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79720' 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 79720 00:11:30.983 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 79720 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:31.257 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:31.516 00:11:31.516 real 0m2.252s 00:11:31.516 user 0m5.464s 00:11:31.516 sys 0m0.791s 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.516 ************************************ 00:11:31.516 END TEST nvmf_bdevio 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.516 ************************************ 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:31.516 00:11:31.516 real 2m28.262s 00:11:31.516 user 6m25.948s 00:11:31.516 sys 0m52.742s 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.516 10:25:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.516 ************************************ 00:11:31.517 END TEST nvmf_target_core 00:11:31.517 ************************************ 00:11:31.517 10:25:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:31.517 10:25:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.517 10:25:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.517 10:25:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.517 ************************************ 00:11:31.517 START TEST nvmf_target_extra 00:11:31.517 ************************************ 00:11:31.517 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:31.776 * Looking for test storage... 
00:11:31.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.776 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.777 --rc genhtml_branch_coverage=1 00:11:31.777 --rc genhtml_function_coverage=1 00:11:31.777 --rc genhtml_legend=1 00:11:31.777 --rc geninfo_all_blocks=1 00:11:31.777 --rc geninfo_unexecuted_blocks=1 00:11:31.777 00:11:31.777 ' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.777 --rc genhtml_branch_coverage=1 00:11:31.777 --rc genhtml_function_coverage=1 00:11:31.777 --rc genhtml_legend=1 00:11:31.777 --rc geninfo_all_blocks=1 00:11:31.777 --rc geninfo_unexecuted_blocks=1 00:11:31.777 00:11:31.777 ' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.777 --rc genhtml_branch_coverage=1 00:11:31.777 --rc genhtml_function_coverage=1 00:11:31.777 --rc genhtml_legend=1 00:11:31.777 --rc geninfo_all_blocks=1 00:11:31.777 --rc geninfo_unexecuted_blocks=1 00:11:31.777 00:11:31.777 ' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:31.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.777 --rc genhtml_branch_coverage=1 00:11:31.777 --rc genhtml_function_coverage=1 00:11:31.777 --rc genhtml_legend=1 00:11:31.777 --rc geninfo_all_blocks=1 00:11:31.777 --rc geninfo_unexecuted_blocks=1 00:11:31.777 00:11:31.777 ' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.777 10:25:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.777 ************************************ 00:11:31.777 START TEST nvmf_auth_target 00:11:31.777 ************************************ 00:11:31.777 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:32.037 * Looking for test storage... 
00:11:32.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.038 --rc genhtml_branch_coverage=1 00:11:32.038 --rc genhtml_function_coverage=1 00:11:32.038 --rc genhtml_legend=1 00:11:32.038 --rc geninfo_all_blocks=1 00:11:32.038 --rc geninfo_unexecuted_blocks=1 00:11:32.038 00:11:32.038 ' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.038 --rc genhtml_branch_coverage=1 00:11:32.038 --rc genhtml_function_coverage=1 00:11:32.038 --rc genhtml_legend=1 00:11:32.038 --rc geninfo_all_blocks=1 00:11:32.038 --rc geninfo_unexecuted_blocks=1 00:11:32.038 00:11:32.038 ' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.038 --rc genhtml_branch_coverage=1 00:11:32.038 --rc genhtml_function_coverage=1 00:11:32.038 --rc genhtml_legend=1 00:11:32.038 --rc geninfo_all_blocks=1 00:11:32.038 --rc geninfo_unexecuted_blocks=1 00:11:32.038 00:11:32.038 ' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.038 --rc genhtml_branch_coverage=1 00:11:32.038 --rc genhtml_function_coverage=1 00:11:32.038 --rc genhtml_legend=1 00:11:32.038 --rc geninfo_all_blocks=1 00:11:32.038 --rc geninfo_unexecuted_blocks=1 00:11:32.038 00:11:32.038 ' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.038 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.039 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:32.039 
10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:32.039 Cannot find device "nvmf_init_br" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:32.039 Cannot find device "nvmf_init_br2" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:32.039 Cannot find device "nvmf_tgt_br" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.039 Cannot find device "nvmf_tgt_br2" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:32.039 Cannot find device "nvmf_init_br" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:32.039 Cannot find device "nvmf_init_br2" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:32.039 Cannot find device "nvmf_tgt_br" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:32.039 Cannot find device "nvmf_tgt_br2" 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:32.039 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:32.298 Cannot find device "nvmf_br" 00:11:32.298 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:32.299 Cannot find device "nvmf_init_if" 00:11:32.299 10:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:32.299 Cannot find device "nvmf_init_if2" 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.299 10:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:32.299 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:32.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:32.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:11:32.558 00:11:32.558 --- 10.0.0.3 ping statistics --- 00:11:32.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.558 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:32.558 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:32.558 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:11:32.558 00:11:32.558 --- 10.0.0.4 ping statistics --- 00:11:32.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.558 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:32.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
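Condensed from the ip/iptables commands in the trace above, this is the test topology being built: the initiator interfaces (10.0.0.1/.2) stay in the root namespace, the target interfaces (10.0.0.3/.4) are moved into the nvmf_tgt_ns_spdk namespace, the bridge halves of each veth pair are enslaved to nvmf_br, and TCP port 4420 is opened; the pings at the end verify reachability in both directions.

ip netns add nvmf_tgt_ns_spdk
# One veth pair per interface; the *_br ends stay in the root namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiators in the root namespace, targets inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge everything together and open the NVMe-oF TCP port.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4            # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns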
00:11:32.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:32.558 00:11:32.558 --- 10.0.0.1 ping statistics --- 00:11:32.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.558 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:32.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:32.558 00:11:32.558 --- 10.0.0.2 ping statistics --- 00:11:32.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.558 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:11:32.558 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=80038 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 80038 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80038 ']' 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.559 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=80057 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=d3c3e8b43e16283eca61951b2487341d37617504fb3595b1 00:11:32.818 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.SzX 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key d3c3e8b43e16283eca61951b2487341d37617504fb3595b1 0 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 d3c3e8b43e16283eca61951b2487341d37617504fb3595b1 0 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=d3c3e8b43e16283eca61951b2487341d37617504fb3595b1 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:11:32.818 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.077 10:25:08 
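At this point the trace launches the two SPDK applications the rest of the test drives: nvmf_tgt (the target, pid 80038) inside the namespace with nvmf_auth debug logging, and spdk_tgt (pid 80057) in the root namespace acting as the initiator side, with its own RPC socket at /var/tmp/host.sock. A hedged sketch of that startup; $SPDK_BIN_DIR stands for the build/bin path seen in the trace, and waitforlisten is the autotest helper that polls the RPC socket (a plain retry loop over rpc.py would serve the same purpose):

# Target: runs inside the namespace, RPCs on the default /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_BIN_DIR"/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!        # pid of the wrapper here; the real helper records the app pid

# Host/initiator side: separate SPDK app with its own RPC socket.
"$SPDK_BIN_DIR"/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!

waitforlisten "$nvmfpid"                    # waits on /var/tmp/spdk.sock
waitforlisten "$hostpid" /var/tmp/host.sock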
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.SzX 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.SzX 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.SzX 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:33.077 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=446ea8b9762fe5d64daaace56039bb663b9be379b492a3a82e283a44b882bea3 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.aLl 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 446ea8b9762fe5d64daaace56039bb663b9be379b492a3a82e283a44b882bea3 3 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 446ea8b9762fe5d64daaace56039bb663b9be379b492a3a82e283a44b882bea3 3 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=446ea8b9762fe5d64daaace56039bb663b9be379b492a3a82e283a44b882bea3 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.aLl 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.aLl 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.aLl 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:33.078 10:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bc7845eed71bb0fdf7c2052bf15ae725 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.T5T 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bc7845eed71bb0fdf7c2052bf15ae725 1 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bc7845eed71bb0fdf7c2052bf15ae725 1 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bc7845eed71bb0fdf7c2052bf15ae725 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.T5T 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.T5T 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.T5T 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6078d815fba5002088e103ca861dcffc59ee77789996032c 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.hvJ 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6078d815fba5002088e103ca861dcffc59ee77789996032c 2 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6078d815fba5002088e103ca861dcffc59ee77789996032c 2 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6078d815fba5002088e103ca861dcffc59ee77789996032c 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.hvJ 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.hvJ 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hvJ 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a037f8719d1aa1b98d236f4c2e969596df7d3ce39144db8f 00:11:33.078 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.tZ1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a037f8719d1aa1b98d236f4c2e969596df7d3ce39144db8f 2 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a037f8719d1aa1b98d236f4c2e969596df7d3ce39144db8f 2 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a037f8719d1aa1b98d236f4c2e969596df7d3ce39144db8f 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.tZ1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.tZ1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.tZ1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:33.337 10:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e38e73dc2294fc2ab2bf877dd9115715 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.1m4 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key e38e73dc2294fc2ab2bf877dd9115715 1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e38e73dc2294fc2ab2bf877dd9115715 1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e38e73dc2294fc2ab2bf877dd9115715 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.1m4 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.1m4 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1m4 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=850e329a306ca2b599ba5359d22a51d44dd62b2e1aef6ea8989b06de079554c9 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.uN6 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
850e329a306ca2b599ba5359d22a51d44dd62b2e1aef6ea8989b06de079554c9 3 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 850e329a306ca2b599ba5359d22a51d44dd62b2e1aef6ea8989b06de079554c9 3 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:33.337 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=850e329a306ca2b599ba5359d22a51d44dd62b2e1aef6ea8989b06de079554c9 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.uN6 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.uN6 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.uN6 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 80038 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80038 ']' 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.338 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 80057 /var/tmp/host.sock 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80057 ']' 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
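The repeated gen_dhchap_key calls above all follow the same recipe: draw len/2 random bytes as a hex string, wrap them in the DHHC-1 transformed-secret representation, and store the result in a 0600-mode temp file whose path lands in keys[] or ckeys[]. A condensed sketch under the assumption (not visible in the trace, which only shows "python -") that the trailing four bytes of the base64 payload are the little-endian CRC-32 of the secret, as in the standard DH-HMAC-CHAP secret format:

digest=null; len=48                 # also used: sha256/32, sha384/48, sha512/64
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
import base64, sys, zlib
secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed CRC-32 suffix
print(f"DHHC-1:{hash_id:02x}:{base64.b64encode(secret + crc).decode()}:")
EOF
chmod 0600 "$file"
echo "$file"      # becomes keys[N] or ckeys[N]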
00:11:33.596 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.597 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.162 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SzX 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SzX 00:11:34.163 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SzX 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.aLl ]] 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aLl 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aLl 00:11:34.454 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aLl 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.T5T 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.T5T 00:11:34.711 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.T5T 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.hvJ ]] 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hvJ 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hvJ 00:11:34.970 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hvJ 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tZ1 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.tZ1 00:11:35.229 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.tZ1 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1m4 ]] 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1m4 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1m4 00:11:35.488 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1m4 00:11:35.746 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:35.746 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uN6 00:11:35.746 10:25:10 
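The registration loop running here adds every generated key file to the keyring of both SPDK applications under matching names, so the target can later reference key$i/ckey$i in nvmf_subsystem_add_host while the host references the same names in bdev_nvme_attach_controller. A compact restatement of that loop, using the rpc.py path and RPC names from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[$i]}"                        # target
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host
    if [[ -n ${ckeys[$i]} ]]; then     # ckeys[3] is intentionally left empty
        $rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done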
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.746 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.746 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.746 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uN6 00:11:35.746 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uN6 00:11:36.005 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:36.005 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:36.005 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.005 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.005 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.005 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.264 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.523 00:11:36.523 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.523 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.523 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.782 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.041 { 00:11:37.041 "cntlid": 1, 00:11:37.041 "qid": 0, 00:11:37.041 "state": "enabled", 00:11:37.041 "thread": "nvmf_tgt_poll_group_000", 00:11:37.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:37.041 "listen_address": { 00:11:37.041 "trtype": "TCP", 00:11:37.041 "adrfam": "IPv4", 00:11:37.041 "traddr": "10.0.0.3", 00:11:37.041 "trsvcid": "4420" 00:11:37.041 }, 00:11:37.041 "peer_address": { 00:11:37.041 "trtype": "TCP", 00:11:37.041 "adrfam": "IPv4", 00:11:37.041 "traddr": "10.0.0.1", 00:11:37.041 "trsvcid": "51144" 00:11:37.041 }, 00:11:37.041 "auth": { 00:11:37.041 "state": "completed", 00:11:37.041 "digest": "sha256", 00:11:37.041 "dhgroup": "null" 00:11:37.041 } 00:11:37.041 } 00:11:37.041 ]' 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.041 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.300 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:11:37.300 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
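The connect_authenticate pass shown above boils down to four RPCs plus a verification step: pin the initiator to one digest/dhgroup combination, allow the host NQN on the subsystem with the key under test, attach a controller over TCP with the same keys, and confirm on the target that the queue pair completed DH-HMAC-CHAP. Condensed from the trace (hostnqn is the uuid-based NQN used throughout):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { $rpc -s /var/tmp/host.sock "$@"; }

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target: bind the host NQN to key0, plus ckey0 for bidirectional auth.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host: attach a controller over TCP, authenticating with the same keys.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify: digest, dhgroup and auth state are read back from the target.
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.state'          # expect "completed"

hostrpc bdev_nvme_detach_controller nvme0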
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:11:41.489 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:41.490 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.748 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.748 10:25:16 
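After the SPDK-to-SPDK pass, the same credentials are exercised through the kernel initiator: nvme-cli gets the raw DHHC-1 strings on the command line (shown here read back from the generated key files, which is an assumption about where the literal strings in the trace come from), connects to the subsystem, then disconnects, and the host entry is removed before the next key is tried.

hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a
hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret      "$(cat "${keys[0]}")" \
    --dhchap-ctrl-secret "$(cat "${ckeys[0]}")"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Drop the host entry again before moving on to the next key.
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"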
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.006 00:11:42.006 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.006 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.006 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.264 { 00:11:42.264 "cntlid": 3, 00:11:42.264 "qid": 0, 00:11:42.264 "state": "enabled", 00:11:42.264 "thread": "nvmf_tgt_poll_group_000", 00:11:42.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:42.264 "listen_address": { 00:11:42.264 "trtype": "TCP", 00:11:42.264 "adrfam": "IPv4", 00:11:42.264 "traddr": "10.0.0.3", 00:11:42.264 "trsvcid": "4420" 00:11:42.264 }, 00:11:42.264 "peer_address": { 00:11:42.264 "trtype": "TCP", 00:11:42.264 "adrfam": "IPv4", 00:11:42.264 "traddr": "10.0.0.1", 00:11:42.264 "trsvcid": "45448" 00:11:42.264 }, 00:11:42.264 "auth": { 00:11:42.264 "state": "completed", 00:11:42.264 "digest": "sha256", 00:11:42.264 "dhgroup": "null" 00:11:42.264 } 00:11:42.264 } 00:11:42.264 ]' 00:11:42.264 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.523 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.781 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret 
DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:11:42.781 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.714 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.281 00:11:44.281 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.281 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.281 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.539 { 00:11:44.539 "cntlid": 5, 00:11:44.539 "qid": 0, 00:11:44.539 "state": "enabled", 00:11:44.539 "thread": "nvmf_tgt_poll_group_000", 00:11:44.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:44.539 "listen_address": { 00:11:44.539 "trtype": "TCP", 00:11:44.539 "adrfam": "IPv4", 00:11:44.539 "traddr": "10.0.0.3", 00:11:44.539 "trsvcid": "4420" 00:11:44.539 }, 00:11:44.539 "peer_address": { 00:11:44.539 "trtype": "TCP", 00:11:44.539 "adrfam": "IPv4", 00:11:44.539 "traddr": "10.0.0.1", 00:11:44.539 "trsvcid": "45464" 00:11:44.539 }, 00:11:44.539 "auth": { 00:11:44.539 "state": "completed", 00:11:44.539 "digest": "sha256", 00:11:44.539 "dhgroup": "null" 00:11:44.539 } 00:11:44.539 } 00:11:44.539 ]' 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.539 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.797 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:11:44.797 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.363 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.622 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.880 00:11:45.880 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.880 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.880 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.467 { 00:11:46.467 "cntlid": 7, 00:11:46.467 "qid": 0, 00:11:46.467 "state": "enabled", 00:11:46.467 "thread": "nvmf_tgt_poll_group_000", 00:11:46.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:46.467 "listen_address": { 00:11:46.467 "trtype": "TCP", 00:11:46.467 "adrfam": "IPv4", 00:11:46.467 "traddr": "10.0.0.3", 00:11:46.467 "trsvcid": "4420" 00:11:46.467 }, 00:11:46.467 "peer_address": { 00:11:46.467 "trtype": "TCP", 00:11:46.467 "adrfam": "IPv4", 00:11:46.467 "traddr": "10.0.0.1", 00:11:46.467 "trsvcid": "45496" 00:11:46.467 }, 00:11:46.467 "auth": { 00:11:46.467 "state": "completed", 00:11:46.467 "digest": "sha256", 00:11:46.467 "dhgroup": "null" 00:11:46.467 } 00:11:46.467 } 00:11:46.467 ]' 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.467 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.730 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:11:46.730 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.297 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.555 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.814 00:11:48.072 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.072 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.072 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.330 { 00:11:48.330 "cntlid": 9, 00:11:48.330 "qid": 0, 00:11:48.330 "state": "enabled", 00:11:48.330 "thread": "nvmf_tgt_poll_group_000", 00:11:48.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:48.330 "listen_address": { 00:11:48.330 "trtype": "TCP", 00:11:48.330 "adrfam": "IPv4", 00:11:48.330 "traddr": "10.0.0.3", 00:11:48.330 "trsvcid": "4420" 00:11:48.330 }, 00:11:48.330 "peer_address": { 00:11:48.330 "trtype": "TCP", 00:11:48.330 "adrfam": "IPv4", 00:11:48.330 "traddr": "10.0.0.1", 00:11:48.330 "trsvcid": "45520" 00:11:48.330 }, 00:11:48.330 "auth": { 00:11:48.330 "state": "completed", 00:11:48.330 "digest": "sha256", 00:11:48.330 "dhgroup": "ffdhe2048" 00:11:48.330 } 00:11:48.330 } 00:11:48.330 ]' 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.330 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.588 
10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:11:48.589 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:11:49.154 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.154 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:49.155 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.155 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.413 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.413 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.413 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.413 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.671 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.929 00:11:49.929 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.929 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.929 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.186 { 00:11:50.186 "cntlid": 11, 00:11:50.186 "qid": 0, 00:11:50.186 "state": "enabled", 00:11:50.186 "thread": "nvmf_tgt_poll_group_000", 00:11:50.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:50.186 "listen_address": { 00:11:50.186 "trtype": "TCP", 00:11:50.186 "adrfam": "IPv4", 00:11:50.186 "traddr": "10.0.0.3", 00:11:50.186 "trsvcid": "4420" 00:11:50.186 }, 00:11:50.186 "peer_address": { 00:11:50.186 "trtype": "TCP", 00:11:50.186 "adrfam": "IPv4", 00:11:50.186 "traddr": "10.0.0.1", 00:11:50.186 "trsvcid": "45526" 00:11:50.186 }, 00:11:50.186 "auth": { 00:11:50.186 "state": "completed", 00:11:50.186 "digest": "sha256", 00:11:50.186 "dhgroup": "ffdhe2048" 00:11:50.186 } 00:11:50.186 } 00:11:50.186 ]' 00:11:50.186 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.444 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.444 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.444 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.444 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.444 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.444 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.444 
10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.702 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:11:50.702 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.636 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.202 00:11:52.202 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.202 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.202 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.460 { 00:11:52.460 "cntlid": 13, 00:11:52.460 "qid": 0, 00:11:52.460 "state": "enabled", 00:11:52.460 "thread": "nvmf_tgt_poll_group_000", 00:11:52.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:52.460 "listen_address": { 00:11:52.460 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.3", 00:11:52.460 "trsvcid": "4420" 00:11:52.460 }, 00:11:52.460 "peer_address": { 00:11:52.460 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.1", 00:11:52.460 "trsvcid": "38516" 00:11:52.460 }, 00:11:52.460 "auth": { 00:11:52.460 "state": "completed", 00:11:52.460 "digest": "sha256", 00:11:52.460 "dhgroup": "ffdhe2048" 00:11:52.460 } 00:11:52.460 } 00:11:52.460 ]' 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.460 10:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.460 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.718 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:11:52.718 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:11:53.284 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.285 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
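Stripped of the xtrace prefixes, each pass of the loop above exercises one digest/dhgroup/key combination with the same host-side sequence. The sketch below only condenses commands already visible in this trace; <HOSTNQN> stands in for the full nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a host NQN, keyN/ckeyN for the key names loaded by target/auth.sh, and the <...secret> placeholders for the corresponding DHHC-1 strings shown elsewhere in the log.

  # restrict the host-side bdev_nvme layer to the digest/dhgroup under test (host RPC socket)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # allow the host on the subsystem with the key (and, when present, the controller key) via the target RPC
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <HOSTNQN> --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # attach a userspace controller through the host RPC server, authenticating with the same key
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q <HOSTNQN> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # verify the qpair completed DH-HMAC-CHAP: controller name nvme0, auth.state "completed",
  # and the expected digest/dhgroup in nvmf_subsystem_get_qpairs output
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, then tear down
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <HOSTNQN> \
      --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 \
      --dhchap-secret <keyN DHHC-1 secret> --dhchap-ctrl-secret <ckeyN DHHC-1 secret>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <HOSTNQN>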
00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.851 10:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.108 00:11:54.108 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.108 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.108 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.366 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.366 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.366 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.366 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.366 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.366 { 00:11:54.366 "cntlid": 15, 00:11:54.366 "qid": 0, 00:11:54.366 "state": "enabled", 00:11:54.366 "thread": "nvmf_tgt_poll_group_000", 00:11:54.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:54.366 "listen_address": { 00:11:54.367 "trtype": "TCP", 00:11:54.367 "adrfam": "IPv4", 00:11:54.367 "traddr": "10.0.0.3", 00:11:54.367 "trsvcid": "4420" 00:11:54.367 }, 00:11:54.367 "peer_address": { 00:11:54.367 "trtype": "TCP", 00:11:54.367 "adrfam": "IPv4", 00:11:54.367 "traddr": "10.0.0.1", 00:11:54.367 "trsvcid": "38546" 00:11:54.367 }, 00:11:54.367 "auth": { 00:11:54.367 "state": "completed", 00:11:54.367 "digest": "sha256", 00:11:54.367 "dhgroup": "ffdhe2048" 00:11:54.367 } 00:11:54.367 } 00:11:54.367 ]' 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.367 
10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.367 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.625 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:11:54.625 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:11:55.190 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:55.449 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.707 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.965 00:11:55.965 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.965 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.965 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.224 { 00:11:56.224 "cntlid": 17, 00:11:56.224 "qid": 0, 00:11:56.224 "state": "enabled", 00:11:56.224 "thread": "nvmf_tgt_poll_group_000", 00:11:56.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:56.224 "listen_address": { 00:11:56.224 "trtype": "TCP", 00:11:56.224 "adrfam": "IPv4", 00:11:56.224 "traddr": "10.0.0.3", 00:11:56.224 "trsvcid": "4420" 00:11:56.224 }, 00:11:56.224 "peer_address": { 00:11:56.224 "trtype": "TCP", 00:11:56.224 "adrfam": "IPv4", 00:11:56.224 "traddr": "10.0.0.1", 00:11:56.224 "trsvcid": "38566" 00:11:56.224 }, 00:11:56.224 "auth": { 00:11:56.224 "state": "completed", 00:11:56.224 "digest": "sha256", 00:11:56.224 "dhgroup": "ffdhe3072" 00:11:56.224 } 00:11:56.224 } 00:11:56.224 ]' 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.224 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.482 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.482 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.482 10:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.482 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.482 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.740 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:11:56.740 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.306 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.564 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.130 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.131 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.388 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.389 { 00:11:58.389 "cntlid": 19, 00:11:58.389 "qid": 0, 00:11:58.389 "state": "enabled", 00:11:58.389 "thread": "nvmf_tgt_poll_group_000", 00:11:58.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:11:58.389 "listen_address": { 00:11:58.389 "trtype": "TCP", 00:11:58.389 "adrfam": "IPv4", 00:11:58.389 "traddr": "10.0.0.3", 00:11:58.389 "trsvcid": "4420" 00:11:58.389 }, 00:11:58.389 "peer_address": { 00:11:58.389 "trtype": "TCP", 00:11:58.389 "adrfam": "IPv4", 00:11:58.389 "traddr": "10.0.0.1", 00:11:58.389 "trsvcid": "38590" 00:11:58.389 }, 00:11:58.389 "auth": { 00:11:58.389 "state": "completed", 00:11:58.389 "digest": "sha256", 00:11:58.389 "dhgroup": "ffdhe3072" 00:11:58.389 } 00:11:58.389 } 00:11:58.389 ]' 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.389 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.647 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:11:58.647 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.213 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.471 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:59.471 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.471 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.472 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.735 00:12:00.032 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.032 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.032 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.291 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.291 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.291 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.291 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.291 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.291 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.291 { 00:12:00.291 "cntlid": 21, 00:12:00.291 "qid": 0, 00:12:00.291 "state": "enabled", 00:12:00.291 "thread": "nvmf_tgt_poll_group_000", 00:12:00.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:00.291 "listen_address": { 00:12:00.291 "trtype": "TCP", 00:12:00.291 "adrfam": "IPv4", 00:12:00.291 "traddr": "10.0.0.3", 00:12:00.291 "trsvcid": "4420" 00:12:00.291 }, 00:12:00.292 "peer_address": { 00:12:00.292 "trtype": "TCP", 00:12:00.292 "adrfam": "IPv4", 00:12:00.292 "traddr": "10.0.0.1", 00:12:00.292 "trsvcid": "38612" 00:12:00.292 }, 00:12:00.292 "auth": { 00:12:00.292 "state": "completed", 00:12:00.292 "digest": "sha256", 00:12:00.292 "dhgroup": "ffdhe3072" 00:12:00.292 } 00:12:00.292 } 00:12:00.292 ]' 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.292 10:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.292 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.550 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:00.550 10:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.484 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.050 00:12:02.050 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.050 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.050 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.308 { 00:12:02.308 "cntlid": 23, 00:12:02.308 "qid": 0, 00:12:02.308 "state": "enabled", 00:12:02.308 "thread": "nvmf_tgt_poll_group_000", 00:12:02.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:02.308 "listen_address": { 00:12:02.308 "trtype": "TCP", 00:12:02.308 "adrfam": "IPv4", 00:12:02.308 "traddr": "10.0.0.3", 00:12:02.308 "trsvcid": "4420" 00:12:02.308 }, 00:12:02.308 "peer_address": { 00:12:02.308 "trtype": "TCP", 00:12:02.308 "adrfam": "IPv4", 00:12:02.308 "traddr": "10.0.0.1", 00:12:02.308 "trsvcid": "54088" 00:12:02.308 }, 00:12:02.308 "auth": { 00:12:02.308 "state": "completed", 00:12:02.308 "digest": "sha256", 00:12:02.308 "dhgroup": "ffdhe3072" 00:12:02.308 } 00:12:02.308 } 00:12:02.308 ]' 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.308 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.566 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:02.566 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.500 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.066 00:12:04.066 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.066 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.066 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.325 { 00:12:04.325 "cntlid": 25, 00:12:04.325 "qid": 0, 00:12:04.325 "state": "enabled", 00:12:04.325 "thread": "nvmf_tgt_poll_group_000", 00:12:04.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:04.325 "listen_address": { 00:12:04.325 "trtype": "TCP", 00:12:04.325 "adrfam": "IPv4", 00:12:04.325 "traddr": "10.0.0.3", 00:12:04.325 "trsvcid": "4420" 00:12:04.325 }, 00:12:04.325 "peer_address": { 00:12:04.325 "trtype": "TCP", 00:12:04.325 "adrfam": "IPv4", 00:12:04.325 "traddr": "10.0.0.1", 00:12:04.325 "trsvcid": "54116" 00:12:04.325 }, 00:12:04.325 "auth": { 00:12:04.325 "state": "completed", 00:12:04.325 "digest": "sha256", 00:12:04.325 "dhgroup": "ffdhe4096" 00:12:04.325 } 00:12:04.325 } 00:12:04.325 ]' 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.325 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.583 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:04.583 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.519 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.086 00:12:06.086 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.086 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.086 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.345 { 00:12:06.345 "cntlid": 27, 00:12:06.345 "qid": 0, 00:12:06.345 "state": "enabled", 00:12:06.345 "thread": "nvmf_tgt_poll_group_000", 00:12:06.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:06.345 "listen_address": { 00:12:06.345 "trtype": "TCP", 00:12:06.345 "adrfam": "IPv4", 00:12:06.345 "traddr": "10.0.0.3", 00:12:06.345 "trsvcid": "4420" 00:12:06.345 }, 00:12:06.345 "peer_address": { 00:12:06.345 "trtype": "TCP", 00:12:06.345 "adrfam": "IPv4", 00:12:06.345 "traddr": "10.0.0.1", 00:12:06.345 "trsvcid": "54138" 00:12:06.345 }, 00:12:06.345 "auth": { 00:12:06.345 "state": "completed", 
00:12:06.345 "digest": "sha256", 00:12:06.345 "dhgroup": "ffdhe4096" 00:12:06.345 } 00:12:06.345 } 00:12:06.345 ]' 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.345 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.603 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:06.603 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:07.538 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.538 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:07.538 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.538 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.539 10:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.539 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.106 00:12:08.106 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.106 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.106 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.364 { 00:12:08.364 "cntlid": 29, 00:12:08.364 "qid": 0, 00:12:08.364 "state": "enabled", 00:12:08.364 "thread": "nvmf_tgt_poll_group_000", 00:12:08.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:08.364 "listen_address": { 00:12:08.364 "trtype": "TCP", 00:12:08.364 "adrfam": "IPv4", 00:12:08.364 "traddr": "10.0.0.3", 00:12:08.364 "trsvcid": "4420" 00:12:08.364 }, 00:12:08.364 "peer_address": { 00:12:08.364 "trtype": "TCP", 00:12:08.364 "adrfam": 
"IPv4", 00:12:08.364 "traddr": "10.0.0.1", 00:12:08.364 "trsvcid": "54162" 00:12:08.364 }, 00:12:08.364 "auth": { 00:12:08.364 "state": "completed", 00:12:08.364 "digest": "sha256", 00:12:08.364 "dhgroup": "ffdhe4096" 00:12:08.364 } 00:12:08.364 } 00:12:08.364 ]' 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.364 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.931 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:08.931 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:09.497 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:09.755 10:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.755 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.017 00:12:10.017 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.017 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.017 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.276 { 00:12:10.276 "cntlid": 31, 00:12:10.276 "qid": 0, 00:12:10.276 "state": "enabled", 00:12:10.276 "thread": "nvmf_tgt_poll_group_000", 00:12:10.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:10.276 "listen_address": { 00:12:10.276 "trtype": "TCP", 00:12:10.276 "adrfam": "IPv4", 00:12:10.276 "traddr": "10.0.0.3", 00:12:10.276 "trsvcid": "4420" 00:12:10.276 }, 00:12:10.276 "peer_address": { 00:12:10.276 "trtype": "TCP", 
00:12:10.276 "adrfam": "IPv4", 00:12:10.276 "traddr": "10.0.0.1", 00:12:10.276 "trsvcid": "54198" 00:12:10.276 }, 00:12:10.276 "auth": { 00:12:10.276 "state": "completed", 00:12:10.276 "digest": "sha256", 00:12:10.276 "dhgroup": "ffdhe4096" 00:12:10.276 } 00:12:10.276 } 00:12:10.276 ]' 00:12:10.276 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.534 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.793 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:10.793 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:11.360 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:11.618 
10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.618 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.186 00:12:12.186 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.186 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.186 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.452 { 00:12:12.452 "cntlid": 33, 00:12:12.452 "qid": 0, 00:12:12.452 "state": "enabled", 00:12:12.452 "thread": "nvmf_tgt_poll_group_000", 00:12:12.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:12.452 "listen_address": { 00:12:12.452 "trtype": "TCP", 00:12:12.452 "adrfam": "IPv4", 00:12:12.452 "traddr": 
"10.0.0.3", 00:12:12.452 "trsvcid": "4420" 00:12:12.452 }, 00:12:12.452 "peer_address": { 00:12:12.452 "trtype": "TCP", 00:12:12.452 "adrfam": "IPv4", 00:12:12.452 "traddr": "10.0.0.1", 00:12:12.452 "trsvcid": "48338" 00:12:12.452 }, 00:12:12.452 "auth": { 00:12:12.452 "state": "completed", 00:12:12.452 "digest": "sha256", 00:12:12.452 "dhgroup": "ffdhe6144" 00:12:12.452 } 00:12:12.452 } 00:12:12.452 ]' 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.452 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.731 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.731 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.731 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.731 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.731 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.989 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:12.989 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:13.555 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.555 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:13.555 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.555 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.814 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.814 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.814 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.073 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.332 00:12:14.332 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.332 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.332 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.591 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.591 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.591 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.591 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.591 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.591 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.591 { 00:12:14.591 "cntlid": 35, 00:12:14.591 "qid": 0, 00:12:14.591 "state": "enabled", 00:12:14.591 "thread": "nvmf_tgt_poll_group_000", 
00:12:14.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:14.591 "listen_address": { 00:12:14.591 "trtype": "TCP", 00:12:14.591 "adrfam": "IPv4", 00:12:14.591 "traddr": "10.0.0.3", 00:12:14.591 "trsvcid": "4420" 00:12:14.591 }, 00:12:14.591 "peer_address": { 00:12:14.591 "trtype": "TCP", 00:12:14.591 "adrfam": "IPv4", 00:12:14.591 "traddr": "10.0.0.1", 00:12:14.591 "trsvcid": "48360" 00:12:14.591 }, 00:12:14.591 "auth": { 00:12:14.591 "state": "completed", 00:12:14.591 "digest": "sha256", 00:12:14.591 "dhgroup": "ffdhe6144" 00:12:14.591 } 00:12:14.591 } 00:12:14.591 ]' 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.850 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.108 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:15.108 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.676 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.676 10:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.938 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.939 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.513 00:12:16.513 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.513 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.513 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.771 { 
00:12:16.771 "cntlid": 37, 00:12:16.771 "qid": 0, 00:12:16.771 "state": "enabled", 00:12:16.771 "thread": "nvmf_tgt_poll_group_000", 00:12:16.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:16.771 "listen_address": { 00:12:16.771 "trtype": "TCP", 00:12:16.771 "adrfam": "IPv4", 00:12:16.771 "traddr": "10.0.0.3", 00:12:16.771 "trsvcid": "4420" 00:12:16.771 }, 00:12:16.771 "peer_address": { 00:12:16.771 "trtype": "TCP", 00:12:16.771 "adrfam": "IPv4", 00:12:16.771 "traddr": "10.0.0.1", 00:12:16.771 "trsvcid": "48388" 00:12:16.771 }, 00:12:16.771 "auth": { 00:12:16.771 "state": "completed", 00:12:16.771 "digest": "sha256", 00:12:16.771 "dhgroup": "ffdhe6144" 00:12:16.771 } 00:12:16.771 } 00:12:16.771 ]' 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.771 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.313 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:17.313 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:17.880 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.139 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.706 00:12:18.706 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.706 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.706 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:18.965 { 00:12:18.965 "cntlid": 39, 00:12:18.965 "qid": 0, 00:12:18.965 "state": "enabled", 00:12:18.965 "thread": "nvmf_tgt_poll_group_000", 00:12:18.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:18.965 "listen_address": { 00:12:18.965 "trtype": "TCP", 00:12:18.965 "adrfam": "IPv4", 00:12:18.965 "traddr": "10.0.0.3", 00:12:18.965 "trsvcid": "4420" 00:12:18.965 }, 00:12:18.965 "peer_address": { 00:12:18.965 "trtype": "TCP", 00:12:18.965 "adrfam": "IPv4", 00:12:18.965 "traddr": "10.0.0.1", 00:12:18.965 "trsvcid": "48416" 00:12:18.965 }, 00:12:18.965 "auth": { 00:12:18.965 "state": "completed", 00:12:18.965 "digest": "sha256", 00:12:18.965 "dhgroup": "ffdhe6144" 00:12:18.965 } 00:12:18.965 } 00:12:18.965 ]' 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.965 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.223 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.223 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.223 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.223 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.223 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.482 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:19.482 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:20.052 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.312 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.879 00:12:20.879 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.879 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.879 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.447 { 00:12:21.447 "cntlid": 41, 00:12:21.447 "qid": 0, 00:12:21.447 "state": "enabled", 00:12:21.447 "thread": "nvmf_tgt_poll_group_000", 00:12:21.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:21.447 "listen_address": { 00:12:21.447 "trtype": "TCP", 00:12:21.447 "adrfam": "IPv4", 00:12:21.447 "traddr": "10.0.0.3", 00:12:21.447 "trsvcid": "4420" 00:12:21.447 }, 00:12:21.447 "peer_address": { 00:12:21.447 "trtype": "TCP", 00:12:21.447 "adrfam": "IPv4", 00:12:21.447 "traddr": "10.0.0.1", 00:12:21.447 "trsvcid": "50470" 00:12:21.447 }, 00:12:21.447 "auth": { 00:12:21.447 "state": "completed", 00:12:21.447 "digest": "sha256", 00:12:21.447 "dhgroup": "ffdhe8192" 00:12:21.447 } 00:12:21.447 } 00:12:21.447 ]' 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.447 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.705 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:21.705 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
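A minimal sketch of the verification step the trace keeps repeating here, assuming the target app answers on SPDK's default RPC socket (the host app uses /var/tmp/host.sock): the subsystem's qpair list is fetched and the negotiated digest, DH group and auth state are asserted with jq, using the values seen in this run.

  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # assert what was actually negotiated on the one active qpair
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]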
00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:22.272 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.530 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.466 00:12:23.466 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.466 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.466 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.724 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.724 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.724 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.724 10:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.724 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.724 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.724 { 00:12:23.724 "cntlid": 43, 00:12:23.724 "qid": 0, 00:12:23.724 "state": "enabled", 00:12:23.724 "thread": "nvmf_tgt_poll_group_000", 00:12:23.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:23.724 "listen_address": { 00:12:23.724 "trtype": "TCP", 00:12:23.724 "adrfam": "IPv4", 00:12:23.724 "traddr": "10.0.0.3", 00:12:23.724 "trsvcid": "4420" 00:12:23.724 }, 00:12:23.724 "peer_address": { 00:12:23.724 "trtype": "TCP", 00:12:23.724 "adrfam": "IPv4", 00:12:23.724 "traddr": "10.0.0.1", 00:12:23.724 "trsvcid": "50502" 00:12:23.724 }, 00:12:23.724 "auth": { 00:12:23.724 "state": "completed", 00:12:23.724 "digest": "sha256", 00:12:23.724 "dhgroup": "ffdhe8192" 00:12:23.724 } 00:12:23.724 } 00:12:23.724 ]' 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.725 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.983 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:23.983 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
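The step traced above pairs a target-side authorization with a host-side attach. A sketch, assuming key1/ckey1 are key names already registered with the keyring earlier in the test (that registration is not shown in this part of the trace):

  # target side: authorize the host NQN and require DH-HMAC-CHAP with this key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller over TCP using the same key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1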
00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.919 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.178 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.749 00:12:25.749 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.749 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.749 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.008 10:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.008 { 00:12:26.008 "cntlid": 45, 00:12:26.008 "qid": 0, 00:12:26.008 "state": "enabled", 00:12:26.008 "thread": "nvmf_tgt_poll_group_000", 00:12:26.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:26.008 "listen_address": { 00:12:26.008 "trtype": "TCP", 00:12:26.008 "adrfam": "IPv4", 00:12:26.008 "traddr": "10.0.0.3", 00:12:26.008 "trsvcid": "4420" 00:12:26.008 }, 00:12:26.008 "peer_address": { 00:12:26.008 "trtype": "TCP", 00:12:26.008 "adrfam": "IPv4", 00:12:26.008 "traddr": "10.0.0.1", 00:12:26.008 "trsvcid": "50530" 00:12:26.008 }, 00:12:26.008 "auth": { 00:12:26.008 "state": "completed", 00:12:26.008 "digest": "sha256", 00:12:26.008 "dhgroup": "ffdhe8192" 00:12:26.008 } 00:12:26.008 } 00:12:26.008 ]' 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.008 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.267 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.267 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.267 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.526 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:26.526 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
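The same subsystem is also exercised through the kernel initiator. A sketch of that connect/disconnect cycle, with the long secrets replaced by placeholders (the real values appear in the trace); the DHHC-1:<nn>:<base64>: form is the NVMe transport secret representation, and the two-digit field commonly indicates how the secret was transformed (00 for an untransformed secret), a spec-level detail the log itself does not spell out.

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
      --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 \
      --dhchap-secret "DHHC-1:02:<host secret, base64>:" \
      --dhchap-ctrl-secret "DHHC-1:01:<controller secret, base64>:"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0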
00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.094 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.662 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.229 00:12:28.230 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.230 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.230 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.488 
10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.488 { 00:12:28.488 "cntlid": 47, 00:12:28.488 "qid": 0, 00:12:28.488 "state": "enabled", 00:12:28.488 "thread": "nvmf_tgt_poll_group_000", 00:12:28.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:28.488 "listen_address": { 00:12:28.488 "trtype": "TCP", 00:12:28.488 "adrfam": "IPv4", 00:12:28.488 "traddr": "10.0.0.3", 00:12:28.488 "trsvcid": "4420" 00:12:28.488 }, 00:12:28.488 "peer_address": { 00:12:28.488 "trtype": "TCP", 00:12:28.488 "adrfam": "IPv4", 00:12:28.488 "traddr": "10.0.0.1", 00:12:28.488 "trsvcid": "50550" 00:12:28.488 }, 00:12:28.488 "auth": { 00:12:28.488 "state": "completed", 00:12:28.488 "digest": "sha256", 00:12:28.488 "dhgroup": "ffdhe8192" 00:12:28.488 } 00:12:28.488 } 00:12:28.488 ]' 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.488 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.056 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:29.057 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
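Note that the key3 iterations above configure only --dhchap-key and pass a single --dhchap-secret to nvme connect: the ${ckeys[$3]:+...} expansion visible in the trace drops the controller-key argument whenever no ckey exists for that slot, so authentication is one-way (the host proves itself to the target but does not challenge the target back). Roughly, as a sketch with $subnqn, $hostnqn and $keyid standing in for the values shown above:

  # empty array when ckeys[$keyid] is unset -> --dhchap-ctrlr-key is omitted entirely
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"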
00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:29.624 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.883 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.884 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.884 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.141 00:12:30.142 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.142 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.142 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.399 { 00:12:30.399 "cntlid": 49, 00:12:30.399 "qid": 0, 00:12:30.399 "state": "enabled", 00:12:30.399 "thread": "nvmf_tgt_poll_group_000", 00:12:30.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:30.399 "listen_address": { 00:12:30.399 "trtype": "TCP", 00:12:30.399 "adrfam": "IPv4", 00:12:30.399 "traddr": "10.0.0.3", 00:12:30.399 "trsvcid": "4420" 00:12:30.399 }, 00:12:30.399 "peer_address": { 00:12:30.399 "trtype": "TCP", 00:12:30.399 "adrfam": "IPv4", 00:12:30.399 "traddr": "10.0.0.1", 00:12:30.399 "trsvcid": "50572" 00:12:30.399 }, 00:12:30.399 "auth": { 00:12:30.399 "state": "completed", 00:12:30.399 "digest": "sha384", 00:12:30.399 "dhgroup": "null" 00:12:30.399 } 00:12:30.399 } 00:12:30.399 ]' 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.399 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.657 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:30.657 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.657 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.657 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.657 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.916 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:30.916 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.484 10:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:31.484 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.743 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.002 00:12:32.002 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.002 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
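Each pass re-runs bdev_nvme_set_options on the host so that only the digest and DH group under test are offered during negotiation; at this point the trace has moved on to sha384 with the null group, which means no ephemeral Diffie-Hellman exchange is performed and the challenge is answered directly from the shared secret (a spec-level note, not something the log states). A sketch of the host-side call:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null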
00:12:32.002 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.261 { 00:12:32.261 "cntlid": 51, 00:12:32.261 "qid": 0, 00:12:32.261 "state": "enabled", 00:12:32.261 "thread": "nvmf_tgt_poll_group_000", 00:12:32.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:32.261 "listen_address": { 00:12:32.261 "trtype": "TCP", 00:12:32.261 "adrfam": "IPv4", 00:12:32.261 "traddr": "10.0.0.3", 00:12:32.261 "trsvcid": "4420" 00:12:32.261 }, 00:12:32.261 "peer_address": { 00:12:32.261 "trtype": "TCP", 00:12:32.261 "adrfam": "IPv4", 00:12:32.261 "traddr": "10.0.0.1", 00:12:32.261 "trsvcid": "55038" 00:12:32.261 }, 00:12:32.261 "auth": { 00:12:32.261 "state": "completed", 00:12:32.261 "digest": "sha384", 00:12:32.261 "dhgroup": "null" 00:12:32.261 } 00:12:32.261 } 00:12:32.261 ]' 00:12:32.261 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.520 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.778 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:32.778 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.345 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.345 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.604 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.862 00:12:33.862 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.862 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.862 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.121 { 00:12:34.121 "cntlid": 53, 00:12:34.121 "qid": 0, 00:12:34.121 "state": "enabled", 00:12:34.121 "thread": "nvmf_tgt_poll_group_000", 00:12:34.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:34.121 "listen_address": { 00:12:34.121 "trtype": "TCP", 00:12:34.121 "adrfam": "IPv4", 00:12:34.121 "traddr": "10.0.0.3", 00:12:34.121 "trsvcid": "4420" 00:12:34.121 }, 00:12:34.121 "peer_address": { 00:12:34.121 "trtype": "TCP", 00:12:34.121 "adrfam": "IPv4", 00:12:34.121 "traddr": "10.0.0.1", 00:12:34.121 "trsvcid": "55068" 00:12:34.121 }, 00:12:34.121 "auth": { 00:12:34.121 "state": "completed", 00:12:34.121 "digest": "sha384", 00:12:34.121 "dhgroup": "null" 00:12:34.121 } 00:12:34.121 } 00:12:34.121 ]' 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.121 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.380 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:34.380 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.380 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.380 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.380 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.639 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:34.639 10:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.207 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.466 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.724 00:12:35.725 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.725 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
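Between iterations the trace tears everything down again so the next digest/DH-group/key combination starts clean; roughly, assuming the target app is reached over SPDK's default RPC socket:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a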
00:12:35.725 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.994 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.994 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.994 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.994 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.994 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.994 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.994 { 00:12:35.994 "cntlid": 55, 00:12:35.994 "qid": 0, 00:12:35.994 "state": "enabled", 00:12:35.994 "thread": "nvmf_tgt_poll_group_000", 00:12:35.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:35.994 "listen_address": { 00:12:35.994 "trtype": "TCP", 00:12:35.994 "adrfam": "IPv4", 00:12:35.994 "traddr": "10.0.0.3", 00:12:35.994 "trsvcid": "4420" 00:12:35.994 }, 00:12:35.994 "peer_address": { 00:12:35.994 "trtype": "TCP", 00:12:35.994 "adrfam": "IPv4", 00:12:35.995 "traddr": "10.0.0.1", 00:12:35.995 "trsvcid": "55100" 00:12:35.995 }, 00:12:35.995 "auth": { 00:12:35.995 "state": "completed", 00:12:35.995 "digest": "sha384", 00:12:35.995 "dhgroup": "null" 00:12:35.995 } 00:12:35.995 } 00:12:35.995 ]' 00:12:35.995 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.995 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.995 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.255 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:36.255 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.255 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.255 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.255 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.514 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:36.514 10:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
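Taken together, the for-markers at target/auth.sh@118-@121 imply the exhaustive sweep sketched below; this is a hedged reconstruction from the trace rather than the script's verbatim source, and the digest/dhgroup lists are examples (only sha256 and sha384 with null/ffdhe2048/ffdhe6144/ffdhe8192 appear in this part of the log):

  for digest in "${digests[@]}"; do              # e.g. sha256 sha384 ...
    for dhgroup in "${dhgroups[@]}"; do          # e.g. null ffdhe2048 ... ffdhe8192
      for keyid in "${!keys[@]}"; do             # key slots 0..3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done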
00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:37.082 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.341 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.910 00:12:37.910 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.910 
10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.910 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.169 { 00:12:38.169 "cntlid": 57, 00:12:38.169 "qid": 0, 00:12:38.169 "state": "enabled", 00:12:38.169 "thread": "nvmf_tgt_poll_group_000", 00:12:38.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:38.169 "listen_address": { 00:12:38.169 "trtype": "TCP", 00:12:38.169 "adrfam": "IPv4", 00:12:38.169 "traddr": "10.0.0.3", 00:12:38.169 "trsvcid": "4420" 00:12:38.169 }, 00:12:38.169 "peer_address": { 00:12:38.169 "trtype": "TCP", 00:12:38.169 "adrfam": "IPv4", 00:12:38.169 "traddr": "10.0.0.1", 00:12:38.169 "trsvcid": "55130" 00:12:38.169 }, 00:12:38.169 "auth": { 00:12:38.169 "state": "completed", 00:12:38.169 "digest": "sha384", 00:12:38.169 "dhgroup": "ffdhe2048" 00:12:38.169 } 00:12:38.169 } 00:12:38.169 ]' 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.169 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.736 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:38.736 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: 
--dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.342 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.601 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.859 00:12:39.859 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.859 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.859 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.121 { 00:12:40.121 "cntlid": 59, 00:12:40.121 "qid": 0, 00:12:40.121 "state": "enabled", 00:12:40.121 "thread": "nvmf_tgt_poll_group_000", 00:12:40.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:40.121 "listen_address": { 00:12:40.121 "trtype": "TCP", 00:12:40.121 "adrfam": "IPv4", 00:12:40.121 "traddr": "10.0.0.3", 00:12:40.121 "trsvcid": "4420" 00:12:40.121 }, 00:12:40.121 "peer_address": { 00:12:40.121 "trtype": "TCP", 00:12:40.121 "adrfam": "IPv4", 00:12:40.121 "traddr": "10.0.0.1", 00:12:40.121 "trsvcid": "55164" 00:12:40.121 }, 00:12:40.121 "auth": { 00:12:40.121 "state": "completed", 00:12:40.121 "digest": "sha384", 00:12:40.121 "dhgroup": "ffdhe2048" 00:12:40.121 } 00:12:40.121 } 00:12:40.121 ]' 00:12:40.121 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.380 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.639 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:40.639 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.206 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.465 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.723 00:12:41.982 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.982 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.982 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.240 { 00:12:42.240 "cntlid": 61, 00:12:42.240 "qid": 0, 00:12:42.240 "state": "enabled", 00:12:42.240 "thread": "nvmf_tgt_poll_group_000", 00:12:42.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:42.240 "listen_address": { 00:12:42.240 "trtype": "TCP", 00:12:42.240 "adrfam": "IPv4", 00:12:42.240 "traddr": "10.0.0.3", 00:12:42.240 "trsvcid": "4420" 00:12:42.240 }, 00:12:42.240 "peer_address": { 00:12:42.240 "trtype": "TCP", 00:12:42.240 "adrfam": "IPv4", 00:12:42.240 "traddr": "10.0.0.1", 00:12:42.240 "trsvcid": "33664" 00:12:42.240 }, 00:12:42.240 "auth": { 00:12:42.240 "state": "completed", 00:12:42.240 "digest": "sha384", 00:12:42.240 "dhgroup": "ffdhe2048" 00:12:42.240 } 00:12:42.240 } 00:12:42.240 ]' 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.240 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.499 10:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:42.499 10:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.434 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.693 00:12:43.693 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.693 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.693 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.260 { 00:12:44.260 "cntlid": 63, 00:12:44.260 "qid": 0, 00:12:44.260 "state": "enabled", 00:12:44.260 "thread": "nvmf_tgt_poll_group_000", 00:12:44.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:44.260 "listen_address": { 00:12:44.260 "trtype": "TCP", 00:12:44.260 "adrfam": "IPv4", 00:12:44.260 "traddr": "10.0.0.3", 00:12:44.260 "trsvcid": "4420" 00:12:44.260 }, 00:12:44.260 "peer_address": { 00:12:44.260 "trtype": "TCP", 00:12:44.260 "adrfam": "IPv4", 00:12:44.260 "traddr": "10.0.0.1", 00:12:44.260 "trsvcid": "33696" 00:12:44.260 }, 00:12:44.260 "auth": { 00:12:44.260 "state": "completed", 00:12:44.260 "digest": "sha384", 00:12:44.260 "dhgroup": "ffdhe2048" 00:12:44.260 } 00:12:44.260 } 00:12:44.260 ]' 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.260 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.519 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:44.519 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:45.086 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.086 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:45.087 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:45.654 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.913 00:12:45.913 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.913 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.913 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.171 { 00:12:46.171 "cntlid": 65, 00:12:46.171 "qid": 0, 00:12:46.171 "state": "enabled", 00:12:46.171 "thread": "nvmf_tgt_poll_group_000", 00:12:46.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:46.171 "listen_address": { 00:12:46.171 "trtype": "TCP", 00:12:46.171 "adrfam": "IPv4", 00:12:46.171 "traddr": "10.0.0.3", 00:12:46.171 "trsvcid": "4420" 00:12:46.171 }, 00:12:46.171 "peer_address": { 00:12:46.171 "trtype": "TCP", 00:12:46.171 "adrfam": "IPv4", 00:12:46.171 "traddr": "10.0.0.1", 00:12:46.171 "trsvcid": "33728" 00:12:46.171 }, 00:12:46.171 "auth": { 00:12:46.171 "state": "completed", 00:12:46.171 "digest": "sha384", 00:12:46.171 "dhgroup": "ffdhe3072" 00:12:46.171 } 00:12:46.171 } 00:12:46.171 ]' 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:46.171 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.429 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.429 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.429 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.687 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:46.688 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.254 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.823 10:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.823 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.082 00:12:48.082 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.082 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.082 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.340 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.340 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.340 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.340 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.340 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.340 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.340 { 00:12:48.340 "cntlid": 67, 00:12:48.340 "qid": 0, 00:12:48.340 "state": "enabled", 00:12:48.340 "thread": "nvmf_tgt_poll_group_000", 00:12:48.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:48.340 "listen_address": { 00:12:48.340 "trtype": "TCP", 00:12:48.340 "adrfam": "IPv4", 00:12:48.340 "traddr": "10.0.0.3", 00:12:48.340 "trsvcid": "4420" 00:12:48.340 }, 00:12:48.340 "peer_address": { 00:12:48.340 "trtype": "TCP", 00:12:48.340 "adrfam": "IPv4", 00:12:48.340 "traddr": "10.0.0.1", 00:12:48.340 "trsvcid": "33758" 00:12:48.340 }, 00:12:48.340 "auth": { 00:12:48.340 "state": "completed", 00:12:48.341 "digest": "sha384", 00:12:48.341 "dhgroup": "ffdhe3072" 00:12:48.341 } 00:12:48.341 } 00:12:48.341 ]' 00:12:48.341 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.341 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.341 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.599 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:48.599 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.599 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.599 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.599 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.858 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:48.858 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:49.426 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.685 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.944 00:12:50.202 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.202 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.202 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.483 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.483 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.483 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.483 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.483 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.483 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.483 { 00:12:50.483 "cntlid": 69, 00:12:50.483 "qid": 0, 00:12:50.483 "state": "enabled", 00:12:50.483 "thread": "nvmf_tgt_poll_group_000", 00:12:50.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:50.483 "listen_address": { 00:12:50.483 "trtype": "TCP", 00:12:50.483 "adrfam": "IPv4", 00:12:50.483 "traddr": "10.0.0.3", 00:12:50.483 "trsvcid": "4420" 00:12:50.483 }, 00:12:50.483 "peer_address": { 00:12:50.483 "trtype": "TCP", 00:12:50.483 "adrfam": "IPv4", 00:12:50.483 "traddr": "10.0.0.1", 00:12:50.483 "trsvcid": "33782" 00:12:50.483 }, 00:12:50.483 "auth": { 00:12:50.483 "state": "completed", 00:12:50.483 "digest": "sha384", 00:12:50.483 "dhgroup": "ffdhe3072" 00:12:50.483 } 00:12:50.484 } 00:12:50.484 ]' 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:50.484 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.761 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:50.761 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.328 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.894 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.153 00:12:52.153 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.153 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.153 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.412 { 00:12:52.412 "cntlid": 71, 00:12:52.412 "qid": 0, 00:12:52.412 "state": "enabled", 00:12:52.412 "thread": "nvmf_tgt_poll_group_000", 00:12:52.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:52.412 "listen_address": { 00:12:52.412 "trtype": "TCP", 00:12:52.412 "adrfam": "IPv4", 00:12:52.412 "traddr": "10.0.0.3", 00:12:52.412 "trsvcid": "4420" 00:12:52.412 }, 00:12:52.412 "peer_address": { 00:12:52.412 "trtype": "TCP", 00:12:52.412 "adrfam": "IPv4", 00:12:52.412 "traddr": "10.0.0.1", 00:12:52.412 "trsvcid": "43674" 00:12:52.412 }, 00:12:52.412 "auth": { 00:12:52.412 "state": "completed", 00:12:52.412 "digest": "sha384", 00:12:52.412 "dhgroup": "ffdhe3072" 00:12:52.412 } 00:12:52.412 } 00:12:52.412 ]' 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.412 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.671 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:52.671 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.671 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.671 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.671 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.930 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:52.930 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:53.498 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.757 10:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.757 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.325 00:12:54.325 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.325 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.325 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.583 { 00:12:54.583 "cntlid": 73, 00:12:54.583 "qid": 0, 00:12:54.583 "state": "enabled", 00:12:54.583 "thread": "nvmf_tgt_poll_group_000", 00:12:54.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:54.583 "listen_address": { 00:12:54.583 "trtype": "TCP", 00:12:54.583 "adrfam": "IPv4", 00:12:54.583 "traddr": "10.0.0.3", 00:12:54.583 "trsvcid": "4420" 00:12:54.583 }, 00:12:54.583 "peer_address": { 00:12:54.583 "trtype": "TCP", 00:12:54.583 "adrfam": "IPv4", 00:12:54.583 "traddr": "10.0.0.1", 00:12:54.583 "trsvcid": "43708" 00:12:54.583 }, 00:12:54.583 "auth": { 00:12:54.583 "state": "completed", 00:12:54.583 "digest": "sha384", 00:12:54.583 "dhgroup": "ffdhe4096" 00:12:54.583 } 00:12:54.583 } 00:12:54.583 ]' 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.583 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.842 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:54.842 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:12:55.778 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:55.779 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.037 10:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.037 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.296 00:12:56.296 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.296 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.296 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.555 { 00:12:56.555 "cntlid": 75, 00:12:56.555 "qid": 0, 00:12:56.555 "state": "enabled", 00:12:56.555 "thread": "nvmf_tgt_poll_group_000", 00:12:56.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:56.555 "listen_address": { 00:12:56.555 "trtype": "TCP", 00:12:56.555 "adrfam": "IPv4", 00:12:56.555 "traddr": "10.0.0.3", 00:12:56.555 "trsvcid": "4420" 00:12:56.555 }, 00:12:56.555 "peer_address": { 00:12:56.555 "trtype": "TCP", 00:12:56.555 "adrfam": "IPv4", 00:12:56.555 "traddr": "10.0.0.1", 00:12:56.555 "trsvcid": "43748" 00:12:56.555 }, 00:12:56.555 "auth": { 00:12:56.555 "state": "completed", 00:12:56.555 "digest": "sha384", 00:12:56.555 "dhgroup": "ffdhe4096" 00:12:56.555 } 00:12:56.555 } 00:12:56.555 ]' 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:56.555 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.813 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:56.813 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.813 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.813 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.813 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.072 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:57.072 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:57.639 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.898 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.466 00:12:58.466 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.466 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.466 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.725 { 00:12:58.725 "cntlid": 77, 00:12:58.725 "qid": 0, 00:12:58.725 "state": "enabled", 00:12:58.725 "thread": "nvmf_tgt_poll_group_000", 00:12:58.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:12:58.725 "listen_address": { 00:12:58.725 "trtype": "TCP", 00:12:58.725 "adrfam": "IPv4", 00:12:58.725 "traddr": "10.0.0.3", 00:12:58.725 "trsvcid": "4420" 00:12:58.725 }, 00:12:58.725 "peer_address": { 00:12:58.725 "trtype": "TCP", 00:12:58.725 "adrfam": "IPv4", 00:12:58.725 "traddr": "10.0.0.1", 00:12:58.725 "trsvcid": "43792" 00:12:58.725 }, 00:12:58.725 "auth": { 00:12:58.725 "state": "completed", 00:12:58.725 "digest": "sha384", 00:12:58.725 "dhgroup": "ffdhe4096" 00:12:58.725 } 00:12:58.725 } 00:12:58.725 ]' 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.725 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.984 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:58.984 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.921 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.921 10:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.921 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.179 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.179 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:00.179 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.179 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.437 00:13:00.437 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.437 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.437 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.695 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.695 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.695 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.695 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.695 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.695 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.695 { 00:13:00.695 "cntlid": 79, 00:13:00.695 "qid": 0, 00:13:00.695 "state": "enabled", 00:13:00.696 "thread": "nvmf_tgt_poll_group_000", 00:13:00.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:00.696 "listen_address": { 00:13:00.696 "trtype": "TCP", 00:13:00.696 "adrfam": "IPv4", 00:13:00.696 "traddr": "10.0.0.3", 00:13:00.696 "trsvcid": "4420" 00:13:00.696 }, 00:13:00.696 "peer_address": { 00:13:00.696 "trtype": "TCP", 00:13:00.696 "adrfam": "IPv4", 00:13:00.696 "traddr": "10.0.0.1", 00:13:00.696 "trsvcid": "43826" 00:13:00.696 }, 00:13:00.696 "auth": { 00:13:00.696 "state": "completed", 00:13:00.696 "digest": "sha384", 00:13:00.696 "dhgroup": "ffdhe4096" 00:13:00.696 } 00:13:00.696 } 00:13:00.696 ]' 00:13:00.696 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.696 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.696 10:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.955 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.955 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.955 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.955 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.955 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.214 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:01.214 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:01.780 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.039 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.618 00:13:02.618 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.618 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.618 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.889 { 00:13:02.889 "cntlid": 81, 00:13:02.889 "qid": 0, 00:13:02.889 "state": "enabled", 00:13:02.889 "thread": "nvmf_tgt_poll_group_000", 00:13:02.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:02.889 "listen_address": { 00:13:02.889 "trtype": "TCP", 00:13:02.889 "adrfam": "IPv4", 00:13:02.889 "traddr": "10.0.0.3", 00:13:02.889 "trsvcid": "4420" 00:13:02.889 }, 00:13:02.889 "peer_address": { 00:13:02.889 "trtype": "TCP", 00:13:02.889 "adrfam": "IPv4", 00:13:02.889 "traddr": "10.0.0.1", 00:13:02.889 "trsvcid": "38308" 00:13:02.889 }, 00:13:02.889 "auth": { 00:13:02.889 "state": "completed", 00:13:02.889 "digest": "sha384", 00:13:02.889 "dhgroup": "ffdhe6144" 00:13:02.889 } 00:13:02.889 } 00:13:02.889 ]' 00:13:02.889 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:13:02.889 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.889 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.889 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.889 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.148 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.148 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.148 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.406 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:03.406 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:03.973 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.232 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.799 00:13:04.799 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.799 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.799 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.058 { 00:13:05.058 "cntlid": 83, 00:13:05.058 "qid": 0, 00:13:05.058 "state": "enabled", 00:13:05.058 "thread": "nvmf_tgt_poll_group_000", 00:13:05.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:05.058 "listen_address": { 00:13:05.058 "trtype": "TCP", 00:13:05.058 "adrfam": "IPv4", 00:13:05.058 "traddr": "10.0.0.3", 00:13:05.058 "trsvcid": "4420" 00:13:05.058 }, 00:13:05.058 "peer_address": { 00:13:05.058 "trtype": "TCP", 00:13:05.058 "adrfam": "IPv4", 00:13:05.058 "traddr": "10.0.0.1", 00:13:05.058 "trsvcid": "38324" 00:13:05.058 }, 00:13:05.058 "auth": { 00:13:05.058 "state": "completed", 00:13:05.058 "digest": "sha384", 
00:13:05.058 "dhgroup": "ffdhe6144" 00:13:05.058 } 00:13:05.058 } 00:13:05.058 ]' 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.058 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.626 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:05.626 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.194 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.453 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.712 00:13:06.971 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.971 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.971 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.971 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.971 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.971 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.971 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.971 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.971 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.971 { 00:13:06.971 "cntlid": 85, 00:13:06.971 "qid": 0, 00:13:06.971 "state": "enabled", 00:13:06.971 "thread": "nvmf_tgt_poll_group_000", 00:13:06.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:06.971 "listen_address": { 00:13:06.971 "trtype": "TCP", 00:13:06.971 "adrfam": "IPv4", 00:13:06.971 "traddr": "10.0.0.3", 00:13:06.971 "trsvcid": "4420" 00:13:06.971 }, 00:13:06.971 "peer_address": { 00:13:06.971 "trtype": "TCP", 00:13:06.971 "adrfam": "IPv4", 00:13:06.971 "traddr": "10.0.0.1", 00:13:06.971 "trsvcid": "38350" 
00:13:06.971 }, 00:13:06.971 "auth": { 00:13:06.971 "state": "completed", 00:13:06.972 "digest": "sha384", 00:13:06.972 "dhgroup": "ffdhe6144" 00:13:06.972 } 00:13:06.972 } 00:13:06.972 ]' 00:13:06.972 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.230 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.231 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.231 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:07.231 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.231 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.231 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.231 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.490 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:07.490 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.058 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.317 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.886 00:13:08.886 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.886 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.886 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.145 { 00:13:09.145 "cntlid": 87, 00:13:09.145 "qid": 0, 00:13:09.145 "state": "enabled", 00:13:09.145 "thread": "nvmf_tgt_poll_group_000", 00:13:09.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:09.145 "listen_address": { 00:13:09.145 "trtype": "TCP", 00:13:09.145 "adrfam": "IPv4", 00:13:09.145 "traddr": "10.0.0.3", 00:13:09.145 "trsvcid": "4420" 00:13:09.145 }, 00:13:09.145 "peer_address": { 00:13:09.145 "trtype": "TCP", 00:13:09.145 "adrfam": "IPv4", 00:13:09.145 "traddr": "10.0.0.1", 00:13:09.145 "trsvcid": 
"38374" 00:13:09.145 }, 00:13:09.145 "auth": { 00:13:09.145 "state": "completed", 00:13:09.145 "digest": "sha384", 00:13:09.145 "dhgroup": "ffdhe6144" 00:13:09.145 } 00:13:09.145 } 00:13:09.145 ]' 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.145 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.404 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.404 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.405 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.663 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:09.663 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:10.231 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.490 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.058 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.318 { 00:13:11.318 "cntlid": 89, 00:13:11.318 "qid": 0, 00:13:11.318 "state": "enabled", 00:13:11.318 "thread": "nvmf_tgt_poll_group_000", 00:13:11.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:11.318 "listen_address": { 00:13:11.318 "trtype": "TCP", 00:13:11.318 "adrfam": "IPv4", 00:13:11.318 "traddr": "10.0.0.3", 00:13:11.318 "trsvcid": "4420" 00:13:11.318 }, 00:13:11.318 "peer_address": { 00:13:11.318 
"trtype": "TCP", 00:13:11.318 "adrfam": "IPv4", 00:13:11.318 "traddr": "10.0.0.1", 00:13:11.318 "trsvcid": "34360" 00:13:11.318 }, 00:13:11.318 "auth": { 00:13:11.318 "state": "completed", 00:13:11.318 "digest": "sha384", 00:13:11.318 "dhgroup": "ffdhe8192" 00:13:11.318 } 00:13:11.318 } 00:13:11.318 ]' 00:13:11.318 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.577 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.837 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:11.837 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:12.774 10:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.774 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.343 00:13:13.343 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.343 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.343 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.911 { 00:13:13.911 "cntlid": 91, 00:13:13.911 "qid": 0, 00:13:13.911 "state": "enabled", 00:13:13.911 "thread": "nvmf_tgt_poll_group_000", 00:13:13.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 
00:13:13.911 "listen_address": { 00:13:13.911 "trtype": "TCP", 00:13:13.911 "adrfam": "IPv4", 00:13:13.911 "traddr": "10.0.0.3", 00:13:13.911 "trsvcid": "4420" 00:13:13.911 }, 00:13:13.911 "peer_address": { 00:13:13.911 "trtype": "TCP", 00:13:13.911 "adrfam": "IPv4", 00:13:13.911 "traddr": "10.0.0.1", 00:13:13.911 "trsvcid": "34390" 00:13:13.911 }, 00:13:13.911 "auth": { 00:13:13.911 "state": "completed", 00:13:13.911 "digest": "sha384", 00:13:13.911 "dhgroup": "ffdhe8192" 00:13:13.911 } 00:13:13.911 } 00:13:13.911 ]' 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.911 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.911 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.911 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.911 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.911 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.911 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.486 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:14.486 10:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:15.053 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.312 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.881 00:13:15.881 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.881 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.881 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.449 { 00:13:16.449 "cntlid": 93, 00:13:16.449 "qid": 0, 00:13:16.449 "state": "enabled", 00:13:16.449 "thread": 
"nvmf_tgt_poll_group_000", 00:13:16.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:16.449 "listen_address": { 00:13:16.449 "trtype": "TCP", 00:13:16.449 "adrfam": "IPv4", 00:13:16.449 "traddr": "10.0.0.3", 00:13:16.449 "trsvcid": "4420" 00:13:16.449 }, 00:13:16.449 "peer_address": { 00:13:16.449 "trtype": "TCP", 00:13:16.449 "adrfam": "IPv4", 00:13:16.449 "traddr": "10.0.0.1", 00:13:16.449 "trsvcid": "34408" 00:13:16.449 }, 00:13:16.449 "auth": { 00:13:16.449 "state": "completed", 00:13:16.449 "digest": "sha384", 00:13:16.449 "dhgroup": "ffdhe8192" 00:13:16.449 } 00:13:16.449 } 00:13:16.449 ]' 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.449 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.709 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:16.709 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.646 10:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.646 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.905 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.905 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.905 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.905 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.474 00:13:18.474 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.474 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.474 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.733 { 00:13:18.733 "cntlid": 95, 00:13:18.733 "qid": 0, 00:13:18.733 "state": "enabled", 00:13:18.733 
"thread": "nvmf_tgt_poll_group_000", 00:13:18.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:18.733 "listen_address": { 00:13:18.733 "trtype": "TCP", 00:13:18.733 "adrfam": "IPv4", 00:13:18.733 "traddr": "10.0.0.3", 00:13:18.733 "trsvcid": "4420" 00:13:18.733 }, 00:13:18.733 "peer_address": { 00:13:18.733 "trtype": "TCP", 00:13:18.733 "adrfam": "IPv4", 00:13:18.733 "traddr": "10.0.0.1", 00:13:18.733 "trsvcid": "34436" 00:13:18.733 }, 00:13:18.733 "auth": { 00:13:18.733 "state": "completed", 00:13:18.733 "digest": "sha384", 00:13:18.733 "dhgroup": "ffdhe8192" 00:13:18.733 } 00:13:18.733 } 00:13:18.733 ]' 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.733 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.992 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.992 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.992 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.251 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:19.251 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.817 10:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:19.817 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.074 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.332 00:13:20.332 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.332 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.332 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.899 { 00:13:20.899 "cntlid": 97, 00:13:20.899 "qid": 0, 00:13:20.899 "state": "enabled", 00:13:20.899 "thread": "nvmf_tgt_poll_group_000", 00:13:20.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:20.899 "listen_address": { 00:13:20.899 "trtype": "TCP", 00:13:20.899 "adrfam": "IPv4", 00:13:20.899 "traddr": "10.0.0.3", 00:13:20.899 "trsvcid": "4420" 00:13:20.899 }, 00:13:20.899 "peer_address": { 00:13:20.899 "trtype": "TCP", 00:13:20.899 "adrfam": "IPv4", 00:13:20.899 "traddr": "10.0.0.1", 00:13:20.899 "trsvcid": "34474" 00:13:20.899 }, 00:13:20.899 "auth": { 00:13:20.899 "state": "completed", 00:13:20.899 "digest": "sha512", 00:13:20.899 "dhgroup": "null" 00:13:20.899 } 00:13:20.899 } 00:13:20.899 ]' 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.899 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.899 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:20.899 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.899 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.899 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.899 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.158 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:21.158 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.093 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.093 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.094 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.094 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.094 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.660 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.660 10:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.660 { 00:13:22.660 "cntlid": 99, 00:13:22.660 "qid": 0, 00:13:22.660 "state": "enabled", 00:13:22.660 "thread": "nvmf_tgt_poll_group_000", 00:13:22.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:22.660 "listen_address": { 00:13:22.660 "trtype": "TCP", 00:13:22.660 "adrfam": "IPv4", 00:13:22.660 "traddr": "10.0.0.3", 00:13:22.660 "trsvcid": "4420" 00:13:22.660 }, 00:13:22.660 "peer_address": { 00:13:22.660 "trtype": "TCP", 00:13:22.660 "adrfam": "IPv4", 00:13:22.660 "traddr": "10.0.0.1", 00:13:22.660 "trsvcid": "59656" 00:13:22.660 }, 00:13:22.660 "auth": { 00:13:22.660 "state": "completed", 00:13:22.660 "digest": "sha512", 00:13:22.660 "dhgroup": "null" 00:13:22.660 } 00:13:22.660 } 00:13:22.660 ]' 00:13:22.660 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.919 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.919 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.919 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:22.919 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.919 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.919 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.919 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.178 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:23.178 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 10:26:58 
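Besides the RPC-level attach, each pass also exercises the kernel initiator with the same secrets: nvme_connect (auth.sh@36) hands the DHHC-1 blobs straight to nvme-cli, and the connection is torn down again at auth.sh@82. Sketch with placeholders for the literal values shown in the trace (HOSTID is the same uuid that forms the host NQN, and <key1>/<ckey1> stand for the generated DHHC-1 secrets):

  nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret "DHHC-1:01:<key1>" --dhchap-ctrl-secret "DHHC-1:02:<ckey1>"
  nvme disconnect -n "$SUBNQN"     # auth.sh@82; the host entry is removed afterwards (@83)

A successful pass is what produces the "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" lines scattered through this output.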
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.114 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.114 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.682 00:13:24.682 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.682 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.682 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.940 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.940 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.940 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.940 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.940 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.940 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.940 { 00:13:24.940 "cntlid": 101, 00:13:24.940 "qid": 0, 00:13:24.940 "state": "enabled", 00:13:24.940 "thread": "nvmf_tgt_poll_group_000", 00:13:24.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:24.940 "listen_address": { 00:13:24.940 "trtype": "TCP", 00:13:24.940 "adrfam": "IPv4", 00:13:24.941 "traddr": "10.0.0.3", 00:13:24.941 "trsvcid": "4420" 00:13:24.941 }, 00:13:24.941 "peer_address": { 00:13:24.941 "trtype": "TCP", 00:13:24.941 "adrfam": "IPv4", 00:13:24.941 "traddr": "10.0.0.1", 00:13:24.941 "trsvcid": "59678" 00:13:24.941 }, 00:13:24.941 "auth": { 00:13:24.941 "state": "completed", 00:13:24.941 "digest": "sha512", 00:13:24.941 "dhgroup": "null" 00:13:24.941 } 00:13:24.941 } 00:13:24.941 ]' 00:13:24.941 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.941 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.199 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:25.199 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.135 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.396 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.672 00:13:26.672 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.672 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.672 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
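One detail visible in the key3 passes above: the controller (bidirectional) key is optional per key id. auth.sh@68 only adds the extra argument when a controller key exists for that index:

  # auth.sh@68 as shown in the trace; expands to nothing when ckeys[$3] is unset or empty
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

which is why the key3 iterations call nvmf_subsystem_add_host, bdev_nvme_attach_controller and nvme connect with only the host-side key (--dhchap-key key3 / --dhchap-secret), exercising unidirectional authentication, while key0 through key2 also present a controller key for bidirectional authentication.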
xtrace_disable 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.940 { 00:13:26.940 "cntlid": 103, 00:13:26.940 "qid": 0, 00:13:26.940 "state": "enabled", 00:13:26.940 "thread": "nvmf_tgt_poll_group_000", 00:13:26.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:26.940 "listen_address": { 00:13:26.940 "trtype": "TCP", 00:13:26.940 "adrfam": "IPv4", 00:13:26.940 "traddr": "10.0.0.3", 00:13:26.940 "trsvcid": "4420" 00:13:26.940 }, 00:13:26.940 "peer_address": { 00:13:26.940 "trtype": "TCP", 00:13:26.940 "adrfam": "IPv4", 00:13:26.940 "traddr": "10.0.0.1", 00:13:26.940 "trsvcid": "59706" 00:13:26.940 }, 00:13:26.940 "auth": { 00:13:26.940 "state": "completed", 00:13:26.940 "digest": "sha512", 00:13:26.940 "dhgroup": "null" 00:13:26.940 } 00:13:26.940 } 00:13:26.940 ]' 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.940 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.941 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.941 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:26.941 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.199 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.199 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.200 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.458 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:27.458 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:28.026 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.594 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.853 00:13:28.853 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.853 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.853 10:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.112 
10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.112 { 00:13:29.112 "cntlid": 105, 00:13:29.112 "qid": 0, 00:13:29.112 "state": "enabled", 00:13:29.112 "thread": "nvmf_tgt_poll_group_000", 00:13:29.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:29.112 "listen_address": { 00:13:29.112 "trtype": "TCP", 00:13:29.112 "adrfam": "IPv4", 00:13:29.112 "traddr": "10.0.0.3", 00:13:29.112 "trsvcid": "4420" 00:13:29.112 }, 00:13:29.112 "peer_address": { 00:13:29.112 "trtype": "TCP", 00:13:29.112 "adrfam": "IPv4", 00:13:29.112 "traddr": "10.0.0.1", 00:13:29.112 "trsvcid": "59742" 00:13:29.112 }, 00:13:29.112 "auth": { 00:13:29.112 "state": "completed", 00:13:29.112 "digest": "sha512", 00:13:29.112 "dhgroup": "ffdhe2048" 00:13:29.112 } 00:13:29.112 } 00:13:29.112 ]' 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.112 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.371 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.371 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.371 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.371 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.371 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.631 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:29.631 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:30.198 10:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.198 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.457 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.025 00:13:31.025 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.025 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.025 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.025 { 00:13:31.025 "cntlid": 107, 00:13:31.025 "qid": 0, 00:13:31.025 "state": "enabled", 00:13:31.025 "thread": "nvmf_tgt_poll_group_000", 00:13:31.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:31.025 "listen_address": { 00:13:31.025 "trtype": "TCP", 00:13:31.025 "adrfam": "IPv4", 00:13:31.025 "traddr": "10.0.0.3", 00:13:31.025 "trsvcid": "4420" 00:13:31.025 }, 00:13:31.025 "peer_address": { 00:13:31.025 "trtype": "TCP", 00:13:31.025 "adrfam": "IPv4", 00:13:31.025 "traddr": "10.0.0.1", 00:13:31.025 "trsvcid": "36048" 00:13:31.025 }, 00:13:31.025 "auth": { 00:13:31.025 "state": "completed", 00:13:31.025 "digest": "sha512", 00:13:31.025 "dhgroup": "ffdhe2048" 00:13:31.025 } 00:13:31.025 } 00:13:31.025 ]' 00:13:31.025 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.284 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.543 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:31.543 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:32.110 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.110 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:32.110 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.110 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.110 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.110 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.111 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.111 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.370 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.629 00:13:32.629 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.629 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.629 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.888 { 00:13:32.888 "cntlid": 109, 00:13:32.888 "qid": 0, 00:13:32.888 "state": "enabled", 00:13:32.888 "thread": "nvmf_tgt_poll_group_000", 00:13:32.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:32.888 "listen_address": { 00:13:32.888 "trtype": "TCP", 00:13:32.888 "adrfam": "IPv4", 00:13:32.888 "traddr": "10.0.0.3", 00:13:32.888 "trsvcid": "4420" 00:13:32.888 }, 00:13:32.888 "peer_address": { 00:13:32.888 "trtype": "TCP", 00:13:32.888 "adrfam": "IPv4", 00:13:32.888 "traddr": "10.0.0.1", 00:13:32.888 "trsvcid": "36084" 00:13:32.888 }, 00:13:32.888 "auth": { 00:13:32.888 "state": "completed", 00:13:32.888 "digest": "sha512", 00:13:32.888 "dhgroup": "ffdhe2048" 00:13:32.888 } 00:13:32.888 } 00:13:32.888 ]' 00:13:32.888 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.147 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.407 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:33.407 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.975 10:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.975 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.235 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.494 00:13:34.494 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.494 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.494 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.753 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.754 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.754 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.754 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.754 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.754 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.754 { 00:13:34.754 "cntlid": 111, 00:13:34.754 "qid": 0, 00:13:34.754 "state": "enabled", 00:13:34.754 "thread": "nvmf_tgt_poll_group_000", 00:13:34.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:34.754 "listen_address": { 00:13:34.754 "trtype": "TCP", 00:13:34.754 "adrfam": "IPv4", 00:13:34.754 "traddr": "10.0.0.3", 00:13:34.754 "trsvcid": "4420" 00:13:34.754 }, 00:13:34.754 "peer_address": { 00:13:34.754 "trtype": "TCP", 00:13:34.754 "adrfam": "IPv4", 00:13:34.754 "traddr": "10.0.0.1", 00:13:34.754 "trsvcid": "36126" 00:13:34.754 }, 00:13:34.754 "auth": { 00:13:34.754 "state": "completed", 00:13:34.754 "digest": "sha512", 00:13:34.754 "dhgroup": "ffdhe2048" 00:13:34.754 } 00:13:34.754 } 00:13:34.754 ]' 00:13:34.754 10:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.012 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.271 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:35.271 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.208 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.467 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.468 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.727 00:13:36.727 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.727 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.727 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.986 { 00:13:36.986 "cntlid": 113, 00:13:36.986 "qid": 0, 00:13:36.986 "state": "enabled", 00:13:36.986 "thread": "nvmf_tgt_poll_group_000", 00:13:36.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:36.986 "listen_address": { 00:13:36.986 "trtype": "TCP", 00:13:36.986 "adrfam": "IPv4", 00:13:36.986 "traddr": "10.0.0.3", 00:13:36.986 "trsvcid": "4420" 00:13:36.986 }, 00:13:36.986 "peer_address": { 00:13:36.986 "trtype": "TCP", 00:13:36.986 "adrfam": "IPv4", 00:13:36.986 "traddr": "10.0.0.1", 00:13:36.986 "trsvcid": "36148" 00:13:36.986 }, 00:13:36.986 "auth": { 00:13:36.986 "state": "completed", 00:13:36.986 "digest": "sha512", 00:13:36.986 "dhgroup": "ffdhe3072" 00:13:36.986 } 00:13:36.986 } 00:13:36.986 ]' 00:13:36.986 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.246 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.505 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:37.505 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.072 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.640 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.934 00:13:38.934 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.934 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.934 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.207 { 00:13:39.207 "cntlid": 115, 00:13:39.207 "qid": 0, 00:13:39.207 "state": "enabled", 00:13:39.207 "thread": "nvmf_tgt_poll_group_000", 00:13:39.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:39.207 "listen_address": { 00:13:39.207 "trtype": "TCP", 00:13:39.207 "adrfam": "IPv4", 00:13:39.207 "traddr": "10.0.0.3", 00:13:39.207 "trsvcid": "4420" 00:13:39.207 }, 00:13:39.207 "peer_address": { 00:13:39.207 "trtype": "TCP", 00:13:39.207 "adrfam": "IPv4", 00:13:39.207 "traddr": "10.0.0.1", 00:13:39.207 "trsvcid": "36174" 00:13:39.207 }, 00:13:39.207 "auth": { 00:13:39.207 "state": "completed", 00:13:39.207 "digest": "sha512", 00:13:39.207 "dhgroup": "ffdhe3072" 00:13:39.207 } 00:13:39.207 } 00:13:39.207 ]' 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.207 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.474 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.474 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.474 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.474 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:39.474 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 
495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:40.411 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:40.669 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:40.669 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.670 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.929 00:13:40.929 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.929 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.929 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.496 { 00:13:41.496 "cntlid": 117, 00:13:41.496 "qid": 0, 00:13:41.496 "state": "enabled", 00:13:41.496 "thread": "nvmf_tgt_poll_group_000", 00:13:41.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:41.496 "listen_address": { 00:13:41.496 "trtype": "TCP", 00:13:41.496 "adrfam": "IPv4", 00:13:41.496 "traddr": "10.0.0.3", 00:13:41.496 "trsvcid": "4420" 00:13:41.496 }, 00:13:41.496 "peer_address": { 00:13:41.496 "trtype": "TCP", 00:13:41.496 "adrfam": "IPv4", 00:13:41.496 "traddr": "10.0.0.1", 00:13:41.496 "trsvcid": "39156" 00:13:41.496 }, 00:13:41.496 "auth": { 00:13:41.496 "state": "completed", 00:13:41.496 "digest": "sha512", 00:13:41.496 "dhgroup": "ffdhe3072" 00:13:41.496 } 00:13:41.496 } 00:13:41.496 ]' 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.496 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.497 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.497 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.497 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.497 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.756 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:41.756 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.693 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.261 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.261 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.520 { 00:13:43.520 "cntlid": 119, 00:13:43.520 "qid": 0, 00:13:43.520 "state": "enabled", 00:13:43.520 "thread": "nvmf_tgt_poll_group_000", 00:13:43.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:43.520 "listen_address": { 00:13:43.520 "trtype": "TCP", 00:13:43.520 "adrfam": "IPv4", 00:13:43.520 "traddr": "10.0.0.3", 00:13:43.520 "trsvcid": "4420" 00:13:43.520 }, 00:13:43.520 "peer_address": { 00:13:43.520 "trtype": "TCP", 00:13:43.520 "adrfam": "IPv4", 00:13:43.520 "traddr": "10.0.0.1", 00:13:43.520 "trsvcid": "39182" 00:13:43.520 }, 00:13:43.520 "auth": { 00:13:43.520 "state": "completed", 00:13:43.520 "digest": "sha512", 00:13:43.520 "dhgroup": "ffdhe3072" 00:13:43.520 } 00:13:43.520 } 00:13:43.520 ]' 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.520 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.779 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:43.779 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.717 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.976 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:44.976 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.977 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.977 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.977 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.977 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.977 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.236 00:13:45.236 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.236 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.236 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.495 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.495 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.495 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.495 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.755 { 00:13:45.755 "cntlid": 121, 00:13:45.755 "qid": 0, 00:13:45.755 "state": "enabled", 00:13:45.755 "thread": "nvmf_tgt_poll_group_000", 00:13:45.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:45.755 "listen_address": { 00:13:45.755 "trtype": "TCP", 00:13:45.755 "adrfam": "IPv4", 00:13:45.755 "traddr": "10.0.0.3", 00:13:45.755 "trsvcid": "4420" 00:13:45.755 }, 00:13:45.755 "peer_address": { 00:13:45.755 "trtype": "TCP", 00:13:45.755 "adrfam": "IPv4", 00:13:45.755 "traddr": "10.0.0.1", 00:13:45.755 "trsvcid": "39216" 00:13:45.755 }, 00:13:45.755 "auth": { 00:13:45.755 "state": "completed", 00:13:45.755 "digest": "sha512", 00:13:45.755 "dhgroup": "ffdhe4096" 00:13:45.755 } 00:13:45.755 } 00:13:45.755 ]' 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.755 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.014 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:46.014 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:46.582 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.150 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:47.150 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.150 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.150 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.150 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.151 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.410 00:13:47.410 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.410 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.410 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.670 { 00:13:47.670 "cntlid": 123, 00:13:47.670 "qid": 0, 00:13:47.670 "state": "enabled", 00:13:47.670 "thread": "nvmf_tgt_poll_group_000", 00:13:47.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:47.670 "listen_address": { 00:13:47.670 "trtype": "TCP", 00:13:47.670 "adrfam": "IPv4", 00:13:47.670 "traddr": "10.0.0.3", 00:13:47.670 "trsvcid": "4420" 00:13:47.670 }, 00:13:47.670 "peer_address": { 00:13:47.670 "trtype": "TCP", 00:13:47.670 "adrfam": "IPv4", 00:13:47.670 "traddr": "10.0.0.1", 00:13:47.670 "trsvcid": "39236" 00:13:47.670 }, 00:13:47.670 "auth": { 00:13:47.670 "state": "completed", 00:13:47.670 "digest": "sha512", 00:13:47.670 "dhgroup": "ffdhe4096" 00:13:47.670 } 00:13:47.670 } 00:13:47.670 ]' 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.670 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.929 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:47.929 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.929 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.929 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.929 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.188 10:27:23 
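[editor's note] The trace above repeats the same verification step after every attach: the host-side app lists its controllers, then the target is asked for the subsystem's queue pairs and the negotiated auth parameters are checked with jq. A minimal sketch of that check, assuming the NQNs from this run; the target-side rpc.py socket is not visible in this excerpt (the test wraps it in rpc_cmd), so the bare invocation below is an assumption, and the expected digest/DH group (sha512, ffdhe4096 here) change per iteration.

# Host side: confirm the controller attached as nvme0 (socket copied from the trace)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
# Target side: inspect the qpair's negotiated auth fields (digest, DH group, completion state)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'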
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:48.188 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:48.757 10:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.016 10:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.016 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.584 00:13:49.584 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.584 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.584 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.843 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.843 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.843 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.843 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.843 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.843 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.843 { 00:13:49.843 "cntlid": 125, 00:13:49.843 "qid": 0, 00:13:49.843 "state": "enabled", 00:13:49.843 "thread": "nvmf_tgt_poll_group_000", 00:13:49.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:49.843 "listen_address": { 00:13:49.843 "trtype": "TCP", 00:13:49.843 "adrfam": "IPv4", 00:13:49.843 "traddr": "10.0.0.3", 00:13:49.843 "trsvcid": "4420" 00:13:49.843 }, 00:13:49.843 "peer_address": { 00:13:49.843 "trtype": "TCP", 00:13:49.843 "adrfam": "IPv4", 00:13:49.843 "traddr": "10.0.0.1", 00:13:49.843 "trsvcid": "39278" 00:13:49.843 }, 00:13:49.843 "auth": { 00:13:49.843 "state": "completed", 00:13:49.843 "digest": "sha512", 00:13:49.843 "dhgroup": "ffdhe4096" 00:13:49.843 } 00:13:49.843 } 00:13:49.843 ]' 00:13:49.844 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.844 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.844 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.844 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.844 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.844 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.844 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.844 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.102 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:50.102 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:51.115 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.115 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:51.115 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.115 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.115 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.682 00:13:51.682 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.682 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.682 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.941 { 00:13:51.941 "cntlid": 127, 00:13:51.941 "qid": 0, 00:13:51.941 "state": "enabled", 00:13:51.941 "thread": "nvmf_tgt_poll_group_000", 00:13:51.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:51.941 "listen_address": { 00:13:51.941 "trtype": "TCP", 00:13:51.941 "adrfam": "IPv4", 00:13:51.941 "traddr": "10.0.0.3", 00:13:51.941 "trsvcid": "4420" 00:13:51.941 }, 00:13:51.941 "peer_address": { 00:13:51.941 "trtype": "TCP", 00:13:51.941 "adrfam": "IPv4", 00:13:51.941 "traddr": "10.0.0.1", 00:13:51.941 "trsvcid": "49114" 00:13:51.941 }, 00:13:51.941 "auth": { 00:13:51.941 "state": "completed", 00:13:51.941 "digest": "sha512", 00:13:51.941 "dhgroup": "ffdhe4096" 00:13:51.941 } 00:13:51.941 } 00:13:51.941 ]' 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.941 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.201 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.201 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.201 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.460 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:52.460 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:53.027 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.286 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.287 10:27:28 
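[editor's note] Here the outer loop moves on from ffdhe4096 to ffdhe6144, and the host-side bdev layer is reconfigured before the next round of key tests. As the trace shows, that reconfiguration is a single RPC per digest/DH-group combination; a sketch assuming the same host socket, with DIGEST and DHGROUP standing in for the loop variables (placeholders, not names from the script).

DIGEST=sha512          # placeholder for the digest under test
DHGROUP=ffdhe6144      # placeholder; this run cycles ffdhe4096, ffdhe6144, ffdhe8192
# Restrict the host's DH-HMAC-CHAP negotiation to exactly this combination
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"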
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.287 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.287 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.855 00:13:53.855 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.855 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.855 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.114 { 00:13:54.114 "cntlid": 129, 00:13:54.114 "qid": 0, 00:13:54.114 "state": "enabled", 00:13:54.114 "thread": "nvmf_tgt_poll_group_000", 00:13:54.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:54.114 "listen_address": { 00:13:54.114 "trtype": "TCP", 00:13:54.114 "adrfam": "IPv4", 00:13:54.114 "traddr": "10.0.0.3", 00:13:54.114 "trsvcid": "4420" 00:13:54.114 }, 00:13:54.114 "peer_address": { 00:13:54.114 "trtype": "TCP", 00:13:54.114 "adrfam": "IPv4", 00:13:54.114 "traddr": "10.0.0.1", 00:13:54.114 "trsvcid": "49134" 00:13:54.114 }, 00:13:54.114 "auth": { 00:13:54.114 "state": "completed", 00:13:54.114 "digest": "sha512", 00:13:54.114 "dhgroup": "ffdhe6144" 00:13:54.114 } 00:13:54.114 } 00:13:54.114 ]' 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.372 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.372 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.372 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.372 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.372 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.630 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:54.630 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:55.197 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.455 10:27:30 
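[editor's note] Each iteration re-registers the host NQN on the subsystem with the key pair under test and removes it again after the disconnect, as the nvmf_subsystem_add_host / nvmf_subsystem_remove_host calls in the trace show. A sketch of that pairing, assuming key1/ckey1 name keys registered earlier in the run (the key setup is not part of this excerpt) and that rpc.py here talks to the target application's RPC socket (not shown above).

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a
# Allow the host, requiring DH-HMAC-CHAP with key1 (ckey1 adds controller authentication)
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ... attach / nvme connect checks run here ...
# Tear down before the next key / DH-group combination
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"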
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.455 10:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.021 00:13:56.021 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.021 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.021 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.279 { 00:13:56.279 "cntlid": 131, 00:13:56.279 "qid": 0, 00:13:56.279 "state": "enabled", 00:13:56.279 "thread": "nvmf_tgt_poll_group_000", 00:13:56.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:56.279 "listen_address": { 00:13:56.279 "trtype": "TCP", 00:13:56.279 "adrfam": "IPv4", 00:13:56.279 "traddr": "10.0.0.3", 00:13:56.279 "trsvcid": "4420" 00:13:56.279 }, 00:13:56.279 "peer_address": { 00:13:56.279 "trtype": "TCP", 00:13:56.279 "adrfam": "IPv4", 00:13:56.279 "traddr": "10.0.0.1", 00:13:56.279 "trsvcid": "49178" 00:13:56.279 }, 00:13:56.279 "auth": { 00:13:56.279 "state": "completed", 00:13:56.279 "digest": "sha512", 00:13:56.279 "dhgroup": "ffdhe6144" 00:13:56.279 } 00:13:56.279 } 00:13:56.279 ]' 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.279 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:56.538 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.538 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.538 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.797 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:56.797 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:57.363 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.931 10:27:32 
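[editor's note] Besides the SPDK bdev path, every round also authenticates through the kernel initiator: nvme_connect expands to the nvme-cli call shown in the trace, and the controller is torn down again with nvme disconnect. A sketch with the secrets elided (the trace carries the literal DHHC-1 strings; substitute secrets of your own), flags -i 1 and -l 0 copied verbatim from the trace.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a
HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a
# Connect with bidirectional DH-HMAC-CHAP (host and controller secrets elided here)
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
  --dhchap-secret "DHHC-1:01:<host secret>" \
  --dhchap-ctrl-secret "DHHC-1:02:<controller secret>"
# Disconnect once the session has been verified
nvme disconnect -n nqn.2024-03.io.spdk:cnode0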
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.931 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.189 00:13:58.189 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.189 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.189 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.447 { 00:13:58.447 "cntlid": 133, 00:13:58.447 "qid": 0, 00:13:58.447 "state": "enabled", 00:13:58.447 "thread": "nvmf_tgt_poll_group_000", 00:13:58.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:13:58.447 "listen_address": { 00:13:58.447 "trtype": "TCP", 00:13:58.447 "adrfam": "IPv4", 00:13:58.447 "traddr": "10.0.0.3", 00:13:58.447 "trsvcid": "4420" 00:13:58.447 }, 00:13:58.447 "peer_address": { 00:13:58.447 "trtype": "TCP", 00:13:58.447 "adrfam": "IPv4", 00:13:58.447 "traddr": "10.0.0.1", 00:13:58.447 "trsvcid": "49208" 00:13:58.447 }, 00:13:58.447 "auth": { 00:13:58.447 "state": "completed", 00:13:58.447 "digest": "sha512", 00:13:58.447 "dhgroup": "ffdhe6144" 00:13:58.447 } 00:13:58.447 } 00:13:58.447 ]' 00:13:58.447 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.706 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.964 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:58.964 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:59.529 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.788 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.355 00:14:00.355 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.355 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.355 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.613 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.613 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.613 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.613 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.871 { 00:14:00.871 "cntlid": 135, 00:14:00.871 "qid": 0, 00:14:00.871 "state": "enabled", 00:14:00.871 "thread": "nvmf_tgt_poll_group_000", 00:14:00.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:00.871 "listen_address": { 00:14:00.871 "trtype": "TCP", 00:14:00.871 "adrfam": "IPv4", 00:14:00.871 "traddr": "10.0.0.3", 00:14:00.871 "trsvcid": "4420" 00:14:00.871 }, 00:14:00.871 "peer_address": { 00:14:00.871 "trtype": "TCP", 00:14:00.871 "adrfam": "IPv4", 00:14:00.871 "traddr": "10.0.0.1", 00:14:00.871 "trsvcid": "49226" 00:14:00.871 }, 00:14:00.871 "auth": { 00:14:00.871 "state": "completed", 00:14:00.871 "digest": "sha512", 00:14:00.871 "dhgroup": "ffdhe6144" 00:14:00.871 } 00:14:00.871 } 00:14:00.871 ]' 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:00.871 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.871 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.871 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.871 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.129 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:01.129 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.063 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.322 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.262 00:14:03.262 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.262 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.262 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.528 { 00:14:03.528 "cntlid": 137, 00:14:03.528 "qid": 0, 00:14:03.528 "state": "enabled", 00:14:03.528 "thread": "nvmf_tgt_poll_group_000", 00:14:03.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:03.528 "listen_address": { 00:14:03.528 "trtype": "TCP", 00:14:03.528 "adrfam": "IPv4", 00:14:03.528 "traddr": "10.0.0.3", 00:14:03.528 "trsvcid": "4420" 00:14:03.528 }, 00:14:03.528 "peer_address": { 00:14:03.528 "trtype": "TCP", 00:14:03.528 "adrfam": "IPv4", 00:14:03.528 "traddr": "10.0.0.1", 00:14:03.528 "trsvcid": "45086" 00:14:03.528 }, 00:14:03.528 "auth": { 00:14:03.528 "state": "completed", 00:14:03.528 "digest": "sha512", 00:14:03.528 "dhgroup": "ffdhe8192" 00:14:03.528 } 00:14:03.528 } 00:14:03.528 ]' 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.528 10:27:38 
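[editor's note] The bdev_connect helper seen throughout this trace reduces to one bdev_nvme_attach_controller RPC carrying the DH-CHAP key names, with a matching detach after the qpair checks. A sketch copied from the invocations above, again assuming key0/ckey0 were registered earlier in the run; note the first -s selects the RPC socket and the second the NVMe-oF service port, exactly as in the trace.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# ... after the auth state has been verified ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0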
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.528 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.787 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.787 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.787 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.045 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:14:04.045 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:04.980 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.239 10:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.239 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.866 00:14:05.866 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.866 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.866 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.124 { 00:14:06.124 "cntlid": 139, 00:14:06.124 "qid": 0, 00:14:06.124 "state": "enabled", 00:14:06.124 "thread": "nvmf_tgt_poll_group_000", 00:14:06.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:06.124 "listen_address": { 00:14:06.124 "trtype": "TCP", 00:14:06.124 "adrfam": "IPv4", 00:14:06.124 "traddr": "10.0.0.3", 00:14:06.124 "trsvcid": "4420" 00:14:06.124 }, 00:14:06.124 "peer_address": { 00:14:06.124 "trtype": "TCP", 00:14:06.124 "adrfam": "IPv4", 00:14:06.124 "traddr": "10.0.0.1", 00:14:06.124 "trsvcid": "45124" 00:14:06.124 }, 00:14:06.124 "auth": { 00:14:06.124 "state": "completed", 00:14:06.124 "digest": "sha512", 00:14:06.124 "dhgroup": "ffdhe8192" 00:14:06.124 } 00:14:06.124 } 00:14:06.124 ]' 00:14:06.124 10:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.124 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.690 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:14:06.690 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: --dhchap-ctrl-secret DHHC-1:02:NjA3OGQ4MTVmYmE1MDAyMDg4ZTEwM2NhODYxZGNmZmM1OWVlNzc3ODk5OTYwMzJjXunlhg==: 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.256 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.514 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.451 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.451 { 00:14:08.451 "cntlid": 141, 00:14:08.451 "qid": 0, 00:14:08.451 "state": "enabled", 00:14:08.451 "thread": "nvmf_tgt_poll_group_000", 00:14:08.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:08.451 "listen_address": { 00:14:08.451 "trtype": "TCP", 00:14:08.451 "adrfam": "IPv4", 00:14:08.451 "traddr": "10.0.0.3", 00:14:08.451 "trsvcid": "4420" 00:14:08.451 }, 00:14:08.451 "peer_address": { 00:14:08.451 "trtype": "TCP", 00:14:08.451 "adrfam": "IPv4", 00:14:08.451 "traddr": "10.0.0.1", 00:14:08.451 "trsvcid": "45164" 00:14:08.451 }, 00:14:08.451 "auth": { 00:14:08.451 "state": "completed", 00:14:08.451 "digest": 
"sha512", 00:14:08.451 "dhgroup": "ffdhe8192" 00:14:08.451 } 00:14:08.451 } 00:14:08.451 ]' 00:14:08.451 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.710 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.969 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:14:08.969 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:01:ZTM4ZTczZGMyMjk0ZmMyYWIyYmY4NzdkZDkxMTU3MTXXedXt: 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:09.904 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.162 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.729 00:14:10.729 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.729 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.729 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.296 { 00:14:11.296 "cntlid": 143, 00:14:11.296 "qid": 0, 00:14:11.296 "state": "enabled", 00:14:11.296 "thread": "nvmf_tgt_poll_group_000", 00:14:11.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:11.296 "listen_address": { 00:14:11.296 "trtype": "TCP", 00:14:11.296 "adrfam": "IPv4", 00:14:11.296 "traddr": "10.0.0.3", 00:14:11.296 "trsvcid": "4420" 00:14:11.296 }, 00:14:11.296 "peer_address": { 00:14:11.296 "trtype": "TCP", 00:14:11.296 "adrfam": "IPv4", 00:14:11.296 "traddr": "10.0.0.1", 00:14:11.296 "trsvcid": "45190" 00:14:11.296 }, 00:14:11.296 "auth": { 00:14:11.296 "state": "completed", 00:14:11.296 
"digest": "sha512", 00:14:11.296 "dhgroup": "ffdhe8192" 00:14:11.296 } 00:14:11.296 } 00:14:11.296 ]' 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.296 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.554 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:11.554 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:12.488 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.489 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.056 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.056 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.623 00:14:13.623 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.623 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.623 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.882 { 00:14:13.882 "cntlid": 145, 00:14:13.882 "qid": 0, 00:14:13.882 "state": "enabled", 00:14:13.882 "thread": "nvmf_tgt_poll_group_000", 00:14:13.882 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:13.882 "listen_address": { 00:14:13.882 "trtype": "TCP", 00:14:13.882 "adrfam": "IPv4", 00:14:13.882 "traddr": "10.0.0.3", 00:14:13.882 "trsvcid": "4420" 00:14:13.882 }, 00:14:13.882 "peer_address": { 00:14:13.882 "trtype": "TCP", 00:14:13.882 "adrfam": "IPv4", 00:14:13.882 "traddr": "10.0.0.1", 00:14:13.882 "trsvcid": "56566" 00:14:13.882 }, 00:14:13.882 "auth": { 00:14:13.882 "state": "completed", 00:14:13.882 "digest": "sha512", 00:14:13.882 "dhgroup": "ffdhe8192" 00:14:13.882 } 00:14:13.882 } 00:14:13.882 ]' 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.882 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.141 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.141 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.142 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.142 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.142 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.401 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:14:14.401 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:00:ZDNjM2U4YjQzZTE2MjgzZWNhNjE5NTFiMjQ4NzM0MWQzNzYxNzUwNGZiMzU5NWIxmG5JfQ==: --dhchap-ctrl-secret DHHC-1:03:NDQ2ZWE4Yjk3NjJmZTVkNjRkYWFhY2U1NjAzOWJiNjYzYjliZTM3OWI0OTJhM2E4MmUyODNhNDRiODgyYmVhM5pdrcY=: 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 00:14:14.968 10:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:14.968 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:15.535 request: 00:14:15.535 { 00:14:15.535 "name": "nvme0", 00:14:15.535 "trtype": "tcp", 00:14:15.535 "traddr": "10.0.0.3", 00:14:15.535 "adrfam": "ipv4", 00:14:15.535 "trsvcid": "4420", 00:14:15.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:15.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:15.535 "prchk_reftag": false, 00:14:15.535 "prchk_guard": false, 00:14:15.535 "hdgst": false, 00:14:15.535 "ddgst": false, 00:14:15.535 "dhchap_key": "key2", 00:14:15.535 "allow_unrecognized_csi": false, 00:14:15.535 "method": "bdev_nvme_attach_controller", 00:14:15.535 "req_id": 1 00:14:15.535 } 00:14:15.535 Got JSON-RPC error response 00:14:15.535 response: 00:14:15.535 { 00:14:15.535 "code": -5, 00:14:15.535 "message": "Input/output error" 00:14:15.535 } 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:15.535 
10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.535 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:16.112 request: 00:14:16.112 { 00:14:16.112 "name": "nvme0", 00:14:16.112 "trtype": "tcp", 00:14:16.112 "traddr": "10.0.0.3", 00:14:16.112 "adrfam": "ipv4", 00:14:16.112 "trsvcid": "4420", 00:14:16.112 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:16.112 "prchk_reftag": false, 00:14:16.112 "prchk_guard": false, 00:14:16.112 "hdgst": false, 00:14:16.112 "ddgst": false, 00:14:16.112 "dhchap_key": "key1", 00:14:16.112 "dhchap_ctrlr_key": "ckey2", 00:14:16.112 "allow_unrecognized_csi": false, 00:14:16.112 "method": "bdev_nvme_attach_controller", 00:14:16.112 "req_id": 1 00:14:16.112 } 00:14:16.112 Got JSON-RPC error response 00:14:16.112 response: 00:14:16.112 { 
00:14:16.112 "code": -5, 00:14:16.112 "message": "Input/output error" 00:14:16.112 } 00:14:16.112 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:16.112 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.112 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.112 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.113 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.732 
request: 00:14:16.732 { 00:14:16.732 "name": "nvme0", 00:14:16.732 "trtype": "tcp", 00:14:16.732 "traddr": "10.0.0.3", 00:14:16.732 "adrfam": "ipv4", 00:14:16.732 "trsvcid": "4420", 00:14:16.732 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:16.732 "prchk_reftag": false, 00:14:16.732 "prchk_guard": false, 00:14:16.732 "hdgst": false, 00:14:16.732 "ddgst": false, 00:14:16.732 "dhchap_key": "key1", 00:14:16.732 "dhchap_ctrlr_key": "ckey1", 00:14:16.732 "allow_unrecognized_csi": false, 00:14:16.732 "method": "bdev_nvme_attach_controller", 00:14:16.732 "req_id": 1 00:14:16.732 } 00:14:16.732 Got JSON-RPC error response 00:14:16.732 response: 00:14:16.732 { 00:14:16.732 "code": -5, 00:14:16.732 "message": "Input/output error" 00:14:16.732 } 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 80038 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 80038 ']' 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 80038 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.991 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80038 00:14:16.991 killing process with pid 80038 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80038' 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 80038 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 80038 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:16.991 10:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=83103 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 83103 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 83103 ']' 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.991 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:17.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:17.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.249 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83103 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 83103 ']' 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.508 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.767 null0 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SzX 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.aLl ]] 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aLl 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.767 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.T5T 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hvJ ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hvJ 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:17.768 10:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tZ1 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1m4 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1m4 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uN6 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:14:17.768 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.704 nvme0n1 00:14:18.962 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.962 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.962 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.222 { 00:14:19.222 "cntlid": 1, 00:14:19.222 "qid": 0, 00:14:19.222 "state": "enabled", 00:14:19.222 "thread": "nvmf_tgt_poll_group_000", 00:14:19.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:19.222 "listen_address": { 00:14:19.222 "trtype": "TCP", 00:14:19.222 "adrfam": "IPv4", 00:14:19.222 "traddr": "10.0.0.3", 00:14:19.222 "trsvcid": "4420" 00:14:19.222 }, 00:14:19.222 "peer_address": { 00:14:19.222 "trtype": "TCP", 00:14:19.222 "adrfam": "IPv4", 00:14:19.222 "traddr": "10.0.0.1", 00:14:19.222 "trsvcid": "56628" 00:14:19.222 }, 00:14:19.222 "auth": { 00:14:19.222 "state": "completed", 00:14:19.222 "digest": "sha512", 00:14:19.222 "dhgroup": "ffdhe8192" 00:14:19.222 } 00:14:19.222 } 00:14:19.222 ]' 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.222 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.481 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:19.481 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key3 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:20.416 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.674 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.933 request: 00:14:20.933 { 00:14:20.933 "name": "nvme0", 00:14:20.933 "trtype": "tcp", 00:14:20.933 "traddr": "10.0.0.3", 00:14:20.933 "adrfam": "ipv4", 00:14:20.933 "trsvcid": "4420", 00:14:20.933 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:20.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:20.933 "prchk_reftag": false, 00:14:20.933 "prchk_guard": false, 00:14:20.933 "hdgst": false, 00:14:20.933 "ddgst": false, 00:14:20.933 "dhchap_key": "key3", 00:14:20.933 "allow_unrecognized_csi": false, 00:14:20.933 "method": "bdev_nvme_attach_controller", 00:14:20.933 "req_id": 1 00:14:20.933 } 00:14:20.933 Got JSON-RPC error response 00:14:20.933 response: 00:14:20.933 { 00:14:20.933 "code": -5, 00:14:20.933 "message": "Input/output error" 00:14:20.933 } 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:20.933 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.192 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.451 request: 00:14:21.451 { 00:14:21.451 "name": "nvme0", 00:14:21.451 "trtype": "tcp", 00:14:21.451 "traddr": "10.0.0.3", 00:14:21.451 "adrfam": "ipv4", 00:14:21.451 "trsvcid": "4420", 00:14:21.451 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:21.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:21.451 "prchk_reftag": false, 00:14:21.451 "prchk_guard": false, 00:14:21.451 "hdgst": false, 00:14:21.451 "ddgst": false, 00:14:21.451 "dhchap_key": "key3", 00:14:21.451 "allow_unrecognized_csi": false, 00:14:21.451 "method": "bdev_nvme_attach_controller", 00:14:21.451 "req_id": 1 00:14:21.451 } 00:14:21.451 Got JSON-RPC error response 00:14:21.451 response: 00:14:21.451 { 00:14:21.451 "code": -5, 00:14:21.451 "message": "Input/output error" 00:14:21.451 } 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:21.451 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:21.710 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.276 request: 00:14:22.276 { 00:14:22.276 "name": "nvme0", 00:14:22.276 "trtype": "tcp", 00:14:22.276 "traddr": "10.0.0.3", 00:14:22.276 "adrfam": "ipv4", 00:14:22.276 "trsvcid": "4420", 00:14:22.276 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:22.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:22.276 "prchk_reftag": false, 00:14:22.276 "prchk_guard": false, 00:14:22.276 "hdgst": false, 00:14:22.276 "ddgst": false, 00:14:22.276 "dhchap_key": "key0", 00:14:22.276 "dhchap_ctrlr_key": "key1", 00:14:22.276 "allow_unrecognized_csi": false, 00:14:22.276 "method": "bdev_nvme_attach_controller", 00:14:22.276 "req_id": 1 00:14:22.276 } 00:14:22.276 Got JSON-RPC error response 00:14:22.276 response: 00:14:22.276 { 00:14:22.276 "code": -5, 00:14:22.276 "message": "Input/output error" 00:14:22.276 } 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:22.276 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:22.535 nvme0n1 00:14:22.535 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:22.535 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.535 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:22.794 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.794 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.794 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:23.052 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:23.986 nvme0n1 00:14:23.986 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:23.986 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.986 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.245 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:24.503 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.503 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:24.503 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid 495b1d55-bad1-4013-8ca4-4675b1022b7a -l 0 --dhchap-secret DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: --dhchap-ctrl-secret DHHC-1:03:ODUwZTMyOWEzMDZjYTJiNTk5YmE1MzU5ZDIyYTUxZDQ0ZGQ2MmIyZTFhZWY2ZWE4OTg5YjA2ZGUwNzk1NTRjOdxNiKg=: 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.070 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:25.637 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:26.203 request: 00:14:26.203 { 00:14:26.203 "name": "nvme0", 00:14:26.203 "trtype": "tcp", 00:14:26.203 "traddr": "10.0.0.3", 00:14:26.203 "adrfam": "ipv4", 00:14:26.203 "trsvcid": "4420", 00:14:26.203 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:26.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a", 00:14:26.203 "prchk_reftag": false, 00:14:26.203 "prchk_guard": false, 00:14:26.203 "hdgst": false, 00:14:26.203 "ddgst": false, 00:14:26.203 "dhchap_key": "key1", 00:14:26.203 "allow_unrecognized_csi": false, 00:14:26.203 "method": "bdev_nvme_attach_controller", 00:14:26.203 "req_id": 1 00:14:26.203 } 00:14:26.203 Got JSON-RPC error response 00:14:26.203 response: 00:14:26.203 { 00:14:26.203 "code": -5, 00:14:26.203 "message": "Input/output error" 00:14:26.203 } 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:26.203 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:27.164 nvme0n1 00:14:27.164 
10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:27.164 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.164 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:27.423 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.423 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.423 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:27.681 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:27.940 nvme0n1 00:14:27.940 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:27.940 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.940 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:28.504 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.504 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.504 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.763 10:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: '' 2s 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: ]] 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmM3ODQ1ZWVkNzFiYjBmZGY3YzIwNTJiZjE1YWU3MjVT0klx: 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:28.763 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: 2s 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:30.664 10:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: ]] 00:14:30.664 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTAzN2Y4NzE5ZDFhYTFiOThkMjM2ZjRjMmU5Njk1OTZkZjdkM2NlMzkxNDRkYjhmVc1Y5w==: 00:14:30.665 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:30.665 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.195 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.762 nvme0n1 00:14:33.762 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:33.762 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.762 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.762 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.762 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:33.762 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:34.698 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:35.265 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:35.265 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:35.265 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:35.524 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:35.524 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.090 request: 00:14:36.090 { 00:14:36.090 "name": "nvme0", 00:14:36.090 "dhchap_key": "key1", 00:14:36.090 "dhchap_ctrlr_key": "key3", 00:14:36.090 "method": "bdev_nvme_set_keys", 00:14:36.090 "req_id": 1 00:14:36.090 } 00:14:36.090 Got JSON-RPC error response 00:14:36.090 response: 00:14:36.090 { 00:14:36.090 "code": -13, 00:14:36.090 "message": "Permission denied" 00:14:36.090 } 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.090 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:36.349 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:36.349 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:37.725 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:38.660 nvme0n1 00:14:38.660 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:38.660 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.660 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.660 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:38.661 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.596 request: 00:14:39.596 { 00:14:39.596 "name": "nvme0", 00:14:39.596 "dhchap_key": "key2", 00:14:39.596 "dhchap_ctrlr_key": "key0", 00:14:39.596 "method": "bdev_nvme_set_keys", 00:14:39.596 "req_id": 1 00:14:39.596 } 00:14:39.596 Got JSON-RPC error response 00:14:39.596 response: 00:14:39.596 { 00:14:39.596 "code": -13, 00:14:39.596 "message": "Permission denied" 00:14:39.596 } 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:39.596 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.854 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:39.854 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:40.793 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:40.793 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:40.793 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80057 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 80057 ']' 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 80057 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80057 00:14:41.052 killing process with pid 80057 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:41.052 10:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80057' 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 80057 00:14:41.052 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 80057 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:41.310 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:41.310 rmmod nvme_tcp 00:14:41.568 rmmod nvme_fabrics 00:14:41.568 rmmod nvme_keyring 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 83103 ']' 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 83103 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 83103 ']' 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 83103 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83103 00:14:41.569 killing process with pid 83103 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83103' 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 83103 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 83103 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:41.569 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.828 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.828 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:41.828 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.SzX /tmp/spdk.key-sha256.T5T /tmp/spdk.key-sha384.tZ1 /tmp/spdk.key-sha512.uN6 /tmp/spdk.key-sha512.aLl /tmp/spdk.key-sha384.hvJ /tmp/spdk.key-sha256.1m4 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:41.828 00:14:41.828 real 3m10.095s 00:14:41.828 user 7m35.020s 00:14:41.828 sys 0m29.809s 00:14:41.828 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.828 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.828 ************************************ 00:14:41.828 END TEST nvmf_auth_target 
00:14:41.828 ************************************ 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.087 ************************************ 00:14:42.087 START TEST nvmf_bdevio_no_huge 00:14:42.087 ************************************ 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:42.087 * Looking for test storage... 00:14:42.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:42.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.087 --rc genhtml_branch_coverage=1 00:14:42.087 --rc genhtml_function_coverage=1 00:14:42.087 --rc genhtml_legend=1 00:14:42.087 --rc geninfo_all_blocks=1 00:14:42.087 --rc geninfo_unexecuted_blocks=1 00:14:42.087 00:14:42.087 ' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:42.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.087 --rc genhtml_branch_coverage=1 00:14:42.087 --rc genhtml_function_coverage=1 00:14:42.087 --rc genhtml_legend=1 00:14:42.087 --rc geninfo_all_blocks=1 00:14:42.087 --rc geninfo_unexecuted_blocks=1 00:14:42.087 00:14:42.087 ' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:42.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.087 --rc genhtml_branch_coverage=1 00:14:42.087 --rc genhtml_function_coverage=1 00:14:42.087 --rc genhtml_legend=1 00:14:42.087 --rc geninfo_all_blocks=1 00:14:42.087 --rc geninfo_unexecuted_blocks=1 00:14:42.087 00:14:42.087 ' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:42.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.087 --rc genhtml_branch_coverage=1 00:14:42.087 --rc genhtml_function_coverage=1 00:14:42.087 --rc genhtml_legend=1 00:14:42.087 --rc geninfo_all_blocks=1 00:14:42.087 --rc geninfo_unexecuted_blocks=1 00:14:42.087 00:14:42.087 ' 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.087 
10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.087 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.347 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.348 
10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:42.348 Cannot find device "nvmf_init_br" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:42.348 Cannot find device "nvmf_init_br2" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:42.348 Cannot find device "nvmf_tgt_br" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.348 Cannot find device "nvmf_tgt_br2" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:42.348 Cannot find device "nvmf_init_br" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:42.348 Cannot find device "nvmf_init_br2" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:42.348 Cannot find device "nvmf_tgt_br" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:42.348 Cannot find device "nvmf_tgt_br2" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:42.348 Cannot find device "nvmf_br" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:42.348 Cannot find device "nvmf_init_if" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:42.348 Cannot find device "nvmf_init_if2" 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:42.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.348 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:42.607 10:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:42.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:42.607 00:14:42.607 --- 10.0.0.3 ping statistics --- 00:14:42.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.607 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:42.607 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:42.607 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:14:42.607 00:14:42.607 --- 10.0.0.4 ping statistics --- 00:14:42.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.607 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:42.607 00:14:42.607 --- 10.0.0.1 ping statistics --- 00:14:42.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.607 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:42.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:42.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:14:42.607 00:14:42.607 --- 10.0.0.2 ping statistics --- 00:14:42.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.607 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=83741 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 83741 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 83741 ']' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.607 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:42.608 [2024-12-10 10:28:17.815188] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
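For reference, the nvmf_veth_init sequence traced above builds a small virtual topology: veth pairs whose nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge, 10.0.0.1-10.0.0.4/24 addressing, iptables ACCEPT rules for TCP port 4420, and ping checks. A condensed, hand-runnable sketch of the same steps (one initiator/target pair only; names and addresses taken from the trace) would be:

    # Namespace that will host the SPDK target.
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per side; the target end moves into the namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator 10.0.0.1/24 on the host, target 10.0.0.3/24 in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP traffic on the default port (the harness also tags its
    # rules with an SPDK_NVMF comment so they can be removed later).
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Sanity check, as in the ping statistics above.
    ping -c 1 10.0.0.3

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way in the trace.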
00:14:42.608 [2024-12-10 10:28:17.815323] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:42.866 [2024-12-10 10:28:17.962751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.866 [2024-12-10 10:28:18.065546] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.866 [2024-12-10 10:28:18.065610] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.866 [2024-12-10 10:28:18.065630] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.866 [2024-12-10 10:28:18.065640] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.866 [2024-12-10 10:28:18.065649] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.866 [2024-12-10 10:28:18.065820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:42.866 [2024-12-10 10:28:18.065870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:42.866 [2024-12-10 10:28:18.066007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:42.866 [2024-12-10 10:28:18.066539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.866 [2024-12-10 10:28:18.072359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.802 [2024-12-10 10:28:18.910080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.802 Malloc0 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.802 10:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.802 [2024-12-10 10:28:18.950261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:43.802 { 00:14:43.802 "params": { 00:14:43.802 "name": "Nvme$subsystem", 00:14:43.802 "trtype": "$TEST_TRANSPORT", 00:14:43.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:43.802 "adrfam": "ipv4", 00:14:43.802 "trsvcid": "$NVMF_PORT", 00:14:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:43.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:43.802 "hdgst": ${hdgst:-false}, 00:14:43.802 "ddgst": ${ddgst:-false} 00:14:43.802 }, 00:14:43.802 "method": "bdev_nvme_attach_controller" 00:14:43.802 } 00:14:43.802 EOF 00:14:43.802 )") 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
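Once nvmf_tgt is up inside the namespace (launched with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 as shown above), the test drives it over the RPC socket with rpc_cmd before starting bdevio. Roughly the same sequence with rpc.py invoked directly, assuming the default /var/tmp/spdk.sock, is sketched below:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport; -o enables in-capsule data and -u 8192 sets its size.
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB RAM-backed bdev with 512-byte blocks, exported through a new subsystem.
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Listen on the namespaced target address set up by nvmf_veth_init.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The gen_nvmf_target_json heredoc above then renders the matching initiator-side bdev_nvme_attach_controller parameters, which are printed next in the trace.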
00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:43.802 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:43.802 "params": { 00:14:43.802 "name": "Nvme1", 00:14:43.802 "trtype": "tcp", 00:14:43.803 "traddr": "10.0.0.3", 00:14:43.803 "adrfam": "ipv4", 00:14:43.803 "trsvcid": "4420", 00:14:43.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.803 "hdgst": false, 00:14:43.803 "ddgst": false 00:14:43.803 }, 00:14:43.803 "method": "bdev_nvme_attach_controller" 00:14:43.803 }' 00:14:43.803 [2024-12-10 10:28:19.008480] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:43.803 [2024-12-10 10:28:19.008592] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83783 ] 00:14:44.061 [2024-12-10 10:28:19.142859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:44.061 [2024-12-10 10:28:19.248120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.061 [2024-12-10 10:28:19.248281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.061 [2024-12-10 10:28:19.248274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.061 [2024-12-10 10:28:19.263315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.320 I/O targets: 00:14:44.320 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:44.320 00:14:44.320 00:14:44.320 CUnit - A unit testing framework for C - Version 2.1-3 00:14:44.320 http://cunit.sourceforge.net/ 00:14:44.320 00:14:44.320 00:14:44.320 Suite: bdevio tests on: Nvme1n1 00:14:44.321 Test: blockdev write read block ...passed 00:14:44.321 Test: blockdev write zeroes read block ...passed 00:14:44.321 Test: blockdev write zeroes read no split ...passed 00:14:44.321 Test: blockdev write zeroes read split ...passed 00:14:44.321 Test: blockdev write zeroes read split partial ...passed 00:14:44.321 Test: blockdev reset ...[2024-12-10 10:28:19.483825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:44.321 [2024-12-10 10:28:19.483941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a26a0 (9): Bad file descriptor 00:14:44.321 [2024-12-10 10:28:19.501978] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
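On the initiator side, bdevio is not an nvme-cli client: it consumes the JSON printed above as an SPDK configuration, creates the Nvme1n1 bdev over NVMe/TCP with bdev_nvme_attach_controller, and runs its read/write/compare/reset suite against it. A standalone equivalent might write the config to a file first; the "subsystems"/"config" wrapper below is an assumption about the usual SPDK --json layout (only the inner params are shown verbatim in the trace):

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Same flags as the traced run: no hugepages, 1024 MiB of regular memory.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024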
00:14:44.321 passed 00:14:44.321 Test: blockdev write read 8 blocks ...passed 00:14:44.321 Test: blockdev write read size > 128k ...passed 00:14:44.321 Test: blockdev write read invalid size ...passed 00:14:44.321 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:44.321 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:44.321 Test: blockdev write read max offset ...passed 00:14:44.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:44.321 Test: blockdev writev readv 8 blocks ...passed 00:14:44.321 Test: blockdev writev readv 30 x 1block ...passed 00:14:44.321 Test: blockdev writev readv block ...passed 00:14:44.321 Test: blockdev writev readv size > 128k ...passed 00:14:44.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:44.321 Test: blockdev comparev and writev ...[2024-12-10 10:28:19.510152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.510230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.510251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.510261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.510562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.510581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.510598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.510608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.510918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.510934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.510949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.510959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.511216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.511231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.511247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:44.321 [2024-12-10 10:28:19.511256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:44.321 passed 00:14:44.321 Test: blockdev nvme passthru rw ...passed 00:14:44.321 Test: blockdev nvme passthru vendor specific ...[2024-12-10 10:28:19.512137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:44.321 [2024-12-10 10:28:19.512162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.512270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:44.321 [2024-12-10 10:28:19.512286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:44.321 [2024-12-10 10:28:19.512418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:44.321 [2024-12-10 10:28:19.512447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:44.321 passed 00:14:44.321 Test: blockdev nvme admin passthru ...[2024-12-10 10:28:19.512551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:44.321 [2024-12-10 10:28:19.512572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:44.321 passed 00:14:44.321 Test: blockdev copy ...passed 00:14:44.321 00:14:44.321 Run Summary: Type Total Ran Passed Failed Inactive 00:14:44.321 suites 1 1 n/a 0 0 00:14:44.321 tests 23 23 23 0 0 00:14:44.321 asserts 152 152 152 0 n/a 00:14:44.321 00:14:44.321 Elapsed time = 0.161 seconds 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.888 rmmod nvme_tcp 00:14:44.888 rmmod nvme_fabrics 00:14:44.888 rmmod nvme_keyring 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 83741 ']' 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 83741 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 83741 ']' 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 83741 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83741 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:44.888 killing process with pid 83741 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83741' 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 83741 00:14:44.888 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 83741 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.148 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.407 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:45.407 00:14:45.407 real 0m3.483s 00:14:45.407 user 0m10.384s 00:14:45.407 sys 0m1.385s 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.407 ************************************ 00:14:45.407 END TEST nvmf_bdevio_no_huge 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.407 ************************************ 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.407 ************************************ 00:14:45.407 START TEST nvmf_tls 00:14:45.407 ************************************ 00:14:45.407 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:45.667 * Looking for test storage... 
00:14:45.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:45.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.667 --rc genhtml_branch_coverage=1 00:14:45.667 --rc genhtml_function_coverage=1 00:14:45.667 --rc genhtml_legend=1 00:14:45.667 --rc geninfo_all_blocks=1 00:14:45.667 --rc geninfo_unexecuted_blocks=1 00:14:45.667 00:14:45.667 ' 00:14:45.667 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:45.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.668 --rc genhtml_branch_coverage=1 00:14:45.668 --rc genhtml_function_coverage=1 00:14:45.668 --rc genhtml_legend=1 00:14:45.668 --rc geninfo_all_blocks=1 00:14:45.668 --rc geninfo_unexecuted_blocks=1 00:14:45.668 00:14:45.668 ' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:45.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.668 --rc genhtml_branch_coverage=1 00:14:45.668 --rc genhtml_function_coverage=1 00:14:45.668 --rc genhtml_legend=1 00:14:45.668 --rc geninfo_all_blocks=1 00:14:45.668 --rc geninfo_unexecuted_blocks=1 00:14:45.668 00:14:45.668 ' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:45.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.668 --rc genhtml_branch_coverage=1 00:14:45.668 --rc genhtml_function_coverage=1 00:14:45.668 --rc genhtml_legend=1 00:14:45.668 --rc geninfo_all_blocks=1 00:14:45.668 --rc geninfo_unexecuted_blocks=1 00:14:45.668 00:14:45.668 ' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.668 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:45.668 
10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:45.668 Cannot find device "nvmf_init_br" 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:45.668 Cannot find device "nvmf_init_br2" 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:45.668 Cannot find device "nvmf_tgt_br" 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.668 Cannot find device "nvmf_tgt_br2" 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:45.668 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:45.927 Cannot find device "nvmf_init_br" 00:14:45.927 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.928 Cannot find device "nvmf_init_br2" 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.928 Cannot find device "nvmf_tgt_br" 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:45.928 Cannot find device "nvmf_tgt_br2" 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:45.928 Cannot find device "nvmf_br" 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:45.928 Cannot find device "nvmf_init_if" 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:45.928 Cannot find device "nvmf_init_if2" 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:45.928 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.928 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:46.187 10:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:46.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:14:46.187 00:14:46.187 --- 10.0.0.3 ping statistics --- 00:14:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.187 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:46.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:46.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:14:46.187 00:14:46.187 --- 10.0.0.4 ping statistics --- 00:14:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.187 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:46.187 00:14:46.187 --- 10.0.0.1 ping statistics --- 00:14:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.187 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:46.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:46.187 00:14:46.187 --- 10.0.0.2 ping statistics --- 00:14:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.187 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84014 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:46.187 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84014 00:14:46.188 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84014 ']' 00:14:46.188 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.188 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.188 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.188 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.188 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.188 [2024-12-10 10:28:21.305376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
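The nvmf_veth_init sequence above builds a small veth-plus-bridge topology so that the SPDK target, which runs inside the nvmf_tgt_ns_spdk network namespace, and the initiator in the root namespace can reach each other on 10.0.0.0/24 (the earlier "Cannot find device" messages are just the harmless cleanup pass for interfaces that do not exist yet). A minimal standalone sketch of the same topology, assuming root privileges and using only commands that appear in the trace; the test creates a second pair of interfaces (10.0.0.2/10.0.0.4) the same way:

# target namespace and one veth pair per side (initiator end / bridge end, target end / bridge end)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator endpoint gets 10.0.0.1, target endpoint gets 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bring everything up and join the bridge-side peers into nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# accept NVMe/TCP traffic on port 4420, allow bridged forwarding, verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3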
00:14:46.188 [2024-12-10 10:28:21.305510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.446 [2024-12-10 10:28:21.446197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.446 [2024-12-10 10:28:21.487661] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.446 [2024-12-10 10:28:21.487726] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.446 [2024-12-10 10:28:21.487739] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.446 [2024-12-10 10:28:21.487748] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.446 [2024-12-10 10:28:21.487757] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.446 [2024-12-10 10:28:21.487791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:46.446 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:47.013 true 00:14:47.013 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:47.013 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:47.271 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:47.271 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:47.271 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:47.271 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:47.271 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:47.838 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:47.838 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:47.838 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:48.096 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:48.096 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:48.355 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:48.355 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:48.355 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.355 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:48.613 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:48.613 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:48.613 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:48.875 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.875 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:49.444 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:49.444 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:49.444 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:49.702 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:49.702 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:49.960 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZUEoGohFGy 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lj9Yp8yjza 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZUEoGohFGy 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lj9Yp8yjza 00:14:50.218 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:50.479 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:50.746 [2024-12-10 10:28:25.857194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.746 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZUEoGohFGy 00:14:50.746 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZUEoGohFGy 00:14:50.746 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:51.005 [2024-12-10 10:28:26.193481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.005 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:51.263 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:51.522 [2024-12-10 10:28:26.685670] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.522 [2024-12-10 10:28:26.685989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:51.522 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:51.780 malloc0 00:14:51.780 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:52.039 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZUEoGohFGy 00:14:52.298 10:28:27 
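At this point the target side of the TLS test is provisioned: format_interchange_psk wraps each raw key in the NVMe configured-PSK interchange format (the NVMeTLSkey-1:01:... string, i.e. a prefix, a hash indicator and a base64 blob carrying the key material plus a CRC), the result is written to a mode-0600 temp file, and setup_nvmf_tgt registers it with the keyring behind a TLS-only TCP listener. A condensed sketch of that rpc.py sequence as it appears in the trace, with the temp-file path and key string copied from this run (the per-host PSK association, nvmf_subsystem_add_host --psk key0, follows immediately below):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=/tmp/tmp.ZUEoGohFGy   # mktemp result from this run

# persist the configured PSK with restrictive permissions
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# pin the ssl socket implementation to TLS 1.3, then finish application init
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# TCP transport, subsystem, and a listener that requires TLS (-k) on 10.0.0.3:4420
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k

# back the subsystem with a malloc namespace and register the PSK as keyring key "key0"
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"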
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.556 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZUEoGohFGy 00:15:04.763 Initializing NVMe Controllers 00:15:04.763 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.763 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.763 Initialization complete. Launching workers. 00:15:04.763 ======================================================== 00:15:04.763 Latency(us) 00:15:04.763 Device Information : IOPS MiB/s Average min max 00:15:04.763 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9677.08 37.80 6614.86 1353.66 12612.77 00:15:04.763 ======================================================== 00:15:04.763 Total : 9677.08 37.80 6614.86 1353.66 12612.77 00:15:04.763 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUEoGohFGy 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZUEoGohFGy 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84250 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84250 /var/tmp/bdevperf.sock 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84250 ']' 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.763 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.763 [2024-12-10 10:28:37.998819] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:04.763 [2024-12-10 10:28:37.998929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84250 ] 00:15:04.763 [2024-12-10 10:28:38.133351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.763 [2024-12-10 10:28:38.173699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.763 [2024-12-10 10:28:38.207553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.763 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.763 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:04.763 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUEoGohFGy 00:15:04.763 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.763 [2024-12-10 10:28:38.783437] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.763 TLSTESTn1 00:15:04.763 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:04.763 Running I/O for 10 seconds... 
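On the initiator side the trace exercises two ways of supplying the PSK: spdk_nvme_perf reads it straight from the key file via --psk-path, while the bdevperf run above registers the same file as keyring key "key0" on the bdevperf RPC socket and references it by name when attaching the controller. A sketch of both invocations, copied from the commands in the trace (repo path as used in this run; in the test a waitforlisten helper blocks until the bdevperf RPC socket is up before any rpc.py call is made):

spdk=/home/vagrant/spdk_repo/spdk
key_path=/tmp/tmp.ZUEoGohFGy

# file-based PSK: perf connects with TLS (-S ssl) from inside the target namespace
ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key_path"

# keyring-based PSK: bdevperf is started in RPC-driven mode (-z) and driven over its own socket
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests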
00:15:06.139 4409.00 IOPS, 17.22 MiB/s [2024-12-10T10:28:42.302Z] 4412.00 IOPS, 17.23 MiB/s [2024-12-10T10:28:43.239Z] 4441.00 IOPS, 17.35 MiB/s [2024-12-10T10:28:44.175Z] 4398.50 IOPS, 17.18 MiB/s [2024-12-10T10:28:45.110Z] 4303.80 IOPS, 16.81 MiB/s [2024-12-10T10:28:46.042Z] 4244.00 IOPS, 16.58 MiB/s [2024-12-10T10:28:47.002Z] 4186.29 IOPS, 16.35 MiB/s [2024-12-10T10:28:48.378Z] 4164.00 IOPS, 16.27 MiB/s [2024-12-10T10:28:49.316Z] 4148.33 IOPS, 16.20 MiB/s [2024-12-10T10:28:49.316Z] 4144.90 IOPS, 16.19 MiB/s 00:15:14.089 Latency(us) 00:15:14.089 [2024-12-10T10:28:49.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.089 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:14.089 Verification LBA range: start 0x0 length 0x2000 00:15:14.089 TLSTESTn1 : 10.02 4150.53 16.21 0.00 0.00 30783.67 5779.08 25380.31 00:15:14.089 [2024-12-10T10:28:49.316Z] =================================================================================================================== 00:15:14.089 [2024-12-10T10:28:49.316Z] Total : 4150.53 16.21 0.00 0.00 30783.67 5779.08 25380.31 00:15:14.089 { 00:15:14.089 "results": [ 00:15:14.089 { 00:15:14.089 "job": "TLSTESTn1", 00:15:14.089 "core_mask": "0x4", 00:15:14.089 "workload": "verify", 00:15:14.089 "status": "finished", 00:15:14.089 "verify_range": { 00:15:14.089 "start": 0, 00:15:14.089 "length": 8192 00:15:14.089 }, 00:15:14.089 "queue_depth": 128, 00:15:14.089 "io_size": 4096, 00:15:14.089 "runtime": 10.017046, 00:15:14.089 "iops": 4150.52501505933, 00:15:14.089 "mibps": 16.21298834007551, 00:15:14.089 "io_failed": 0, 00:15:14.089 "io_timeout": 0, 00:15:14.089 "avg_latency_us": 30783.668664789126, 00:15:14.089 "min_latency_us": 5779.083636363636, 00:15:14.089 "max_latency_us": 25380.305454545454 00:15:14.089 } 00:15:14.089 ], 00:15:14.089 "core_count": 1 00:15:14.089 } 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84250 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84250 ']' 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84250 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84250 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:14.089 killing process with pid 84250 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84250' 00:15:14.089 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.089 00:15:14.089 Latency(us) 00:15:14.089 [2024-12-10T10:28:49.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.089 [2024-12-10T10:28:49.316Z] =================================================================================================================== 00:15:14.089 [2024-12-10T10:28:49.316Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84250 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84250 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lj9Yp8yjza 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lj9Yp8yjza 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lj9Yp8yjza 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lj9Yp8yjza 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:14.089 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84377 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84377 /var/tmp/bdevperf.sock 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84377 ']' 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.090 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.090 [2024-12-10 10:28:49.248603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:14.090 [2024-12-10 10:28:49.248687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84377 ] 00:15:14.349 [2024-12-10 10:28:49.387548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.349 [2024-12-10 10:28:49.428498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.349 [2024-12-10 10:28:49.460743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.349 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.349 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:14.349 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lj9Yp8yjza 00:15:14.607 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:14.865 [2024-12-10 10:28:50.046072] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.865 [2024-12-10 10:28:50.057116] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:14.865 [2024-12-10 10:28:50.057795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668550 (107): Transport endpoint is not connected 00:15:14.865 [2024-12-10 10:28:50.058779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668550 (9): Bad file descriptor 00:15:14.865 [2024-12-10 10:28:50.059775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:14.865 [2024-12-10 10:28:50.059800] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:14.865 [2024-12-10 10:28:50.059811] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:14.865 [2024-12-10 10:28:50.059827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:14.865 request: 00:15:14.865 { 00:15:14.865 "name": "TLSTEST", 00:15:14.865 "trtype": "tcp", 00:15:14.865 "traddr": "10.0.0.3", 00:15:14.865 "adrfam": "ipv4", 00:15:14.865 "trsvcid": "4420", 00:15:14.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.865 "prchk_reftag": false, 00:15:14.865 "prchk_guard": false, 00:15:14.865 "hdgst": false, 00:15:14.865 "ddgst": false, 00:15:14.865 "psk": "key0", 00:15:14.865 "allow_unrecognized_csi": false, 00:15:14.865 "method": "bdev_nvme_attach_controller", 00:15:14.865 "req_id": 1 00:15:14.865 } 00:15:14.865 Got JSON-RPC error response 00:15:14.865 response: 00:15:14.865 { 00:15:14.865 "code": -5, 00:15:14.865 "message": "Input/output error" 00:15:14.865 } 00:15:14.865 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84377 00:15:14.865 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84377 ']' 00:15:14.865 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84377 00:15:14.865 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:14.865 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.865 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84377 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:15.124 killing process with pid 84377 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84377' 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84377 00:15:15.124 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.124 00:15:15.124 Latency(us) 00:15:15.124 [2024-12-10T10:28:50.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.124 [2024-12-10T10:28:50.351Z] =================================================================================================================== 00:15:15.124 [2024-12-10T10:28:50.351Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84377 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZUEoGohFGy 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZUEoGohFGy 
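The Input/output error (-5) above is the expected outcome of the first negative case: the initiator registered key_2 (/tmp/tmp.lj9Yp8yjza) while the target only associated key0 (/tmp/tmp.ZUEoGohFGy) with host1, so the TLS handshake cannot complete and bdev_nvme_attach_controller reports the dropped connection as an I/O error before the next case (host2) starts below. A sketch of the mismatched attach, taken from the commands in the trace; the trailing echo is illustrative only:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# register the wrong key on the initiator and try to attach: expected to fail
$rpc keyring_file_add_key key0 /tmp/tmp.lj9Yp8yjza
$rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 \
    || echo "attach failed as expected (PSK mismatch)"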
00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZUEoGohFGy 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZUEoGohFGy 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84398 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84398 /var/tmp/bdevperf.sock 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84398 ']' 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.124 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.125 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.125 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.125 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.125 [2024-12-10 10:28:50.310193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:15.125 [2024-12-10 10:28:50.310284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84398 ] 00:15:15.383 [2024-12-10 10:28:50.450590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.383 [2024-12-10 10:28:50.492899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.383 [2024-12-10 10:28:50.526787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.383 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.383 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:15.383 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUEoGohFGy 00:15:15.948 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:16.207 [2024-12-10 10:28:51.185110] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:16.207 [2024-12-10 10:28:51.190052] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:16.207 [2024-12-10 10:28:51.190091] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:16.207 [2024-12-10 10:28:51.190149] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:16.207 [2024-12-10 10:28:51.190777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2306550 (107): Transport endpoint is not connected 00:15:16.207 [2024-12-10 10:28:51.191762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2306550 (9): Bad file descriptor 00:15:16.207 [2024-12-10 10:28:51.192757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:16.207 [2024-12-10 10:28:51.192780] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:16.207 [2024-12-10 10:28:51.192791] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:16.207 [2024-12-10 10:28:51.192806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:16.207 request: 00:15:16.207 { 00:15:16.207 "name": "TLSTEST", 00:15:16.207 "trtype": "tcp", 00:15:16.207 "traddr": "10.0.0.3", 00:15:16.207 "adrfam": "ipv4", 00:15:16.207 "trsvcid": "4420", 00:15:16.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.207 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:16.207 "prchk_reftag": false, 00:15:16.207 "prchk_guard": false, 00:15:16.207 "hdgst": false, 00:15:16.207 "ddgst": false, 00:15:16.207 "psk": "key0", 00:15:16.207 "allow_unrecognized_csi": false, 00:15:16.207 "method": "bdev_nvme_attach_controller", 00:15:16.207 "req_id": 1 00:15:16.207 } 00:15:16.207 Got JSON-RPC error response 00:15:16.207 response: 00:15:16.207 { 00:15:16.207 "code": -5, 00:15:16.207 "message": "Input/output error" 00:15:16.207 } 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84398 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84398 ']' 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84398 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84398 00:15:16.207 killing process with pid 84398 00:15:16.207 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.207 00:15:16.207 Latency(us) 00:15:16.207 [2024-12-10T10:28:51.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.207 [2024-12-10T10:28:51.434Z] =================================================================================================================== 00:15:16.207 [2024-12-10T10:28:51.434Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84398' 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84398 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84398 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUEoGohFGy 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUEoGohFGy 
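The second negative case just above fails for a different reason: PSKs are bound per host through nvmf_subsystem_add_host, and only nqn.2016-06.io.spdk:host1 was given key0, so when the initiator presents the PSK identity NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 the target finds nothing to negotiate with ("Could not find PSK for identity") and the attach again ends in -5. If host2 were actually meant to connect over TLS it would need its own association on the target first; a hypothetical sketch, reusing key0 purely for illustration:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# hypothetical: authorize host2 on the target with a PSK before the initiator attaches
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0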
00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUEoGohFGy 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZUEoGohFGy 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84419 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84419 /var/tmp/bdevperf.sock 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84419 ']' 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.207 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.465 [2024-12-10 10:28:51.444414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:16.465 [2024-12-10 10:28:51.444515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84419 ] 00:15:16.465 [2024-12-10 10:28:51.582322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.465 [2024-12-10 10:28:51.625449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.465 [2024-12-10 10:28:51.659599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.723 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.723 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:16.723 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUEoGohFGy 00:15:16.981 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:17.240 [2024-12-10 10:28:52.326060] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.240 [2024-12-10 10:28:52.337191] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:17.240 [2024-12-10 10:28:52.337245] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:17.240 [2024-12-10 10:28:52.337309] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:17.240 [2024-12-10 10:28:52.337687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1012550 (107): Transport endpoint is not connected 00:15:17.240 [2024-12-10 10:28:52.338676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1012550 (9): Bad file descriptor 00:15:17.240 [2024-12-10 10:28:52.339673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:17.240 [2024-12-10 10:28:52.339703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:17.240 [2024-12-10 10:28:52.339715] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:15:17.240 [2024-12-10 10:28:52.339731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:17.240 request: 00:15:17.240 { 00:15:17.240 "name": "TLSTEST", 00:15:17.240 "trtype": "tcp", 00:15:17.240 "traddr": "10.0.0.3", 00:15:17.240 "adrfam": "ipv4", 00:15:17.240 "trsvcid": "4420", 00:15:17.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:17.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.240 "prchk_reftag": false, 00:15:17.240 "prchk_guard": false, 00:15:17.240 "hdgst": false, 00:15:17.240 "ddgst": false, 00:15:17.240 "psk": "key0", 00:15:17.240 "allow_unrecognized_csi": false, 00:15:17.240 "method": "bdev_nvme_attach_controller", 00:15:17.240 "req_id": 1 00:15:17.240 } 00:15:17.240 Got JSON-RPC error response 00:15:17.240 response: 00:15:17.240 { 00:15:17.240 "code": -5, 00:15:17.240 "message": "Input/output error" 00:15:17.240 } 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84419 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84419 ']' 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84419 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84419 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:17.240 killing process with pid 84419 00:15:17.240 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.240 00:15:17.240 Latency(us) 00:15:17.240 [2024-12-10T10:28:52.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.240 [2024-12-10T10:28:52.467Z] =================================================================================================================== 00:15:17.240 [2024-12-10T10:28:52.467Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:17.240 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84419' 00:15:17.241 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84419 00:15:17.241 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84419 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:17.499 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:17.500 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84440 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84440 /var/tmp/bdevperf.sock 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84440 ']' 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.500 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.500 [2024-12-10 10:28:52.604278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
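The NOT run_bdevperf invocation above exercises the empty-PSK case: key0 is added with an empty path. As the lines below show, keyring_file_check_path accepts only absolute paths, so the RPC fails with -1 (Operation not permitted) and the subsequent attach fails with -126 (Required key not available). The failing call, as issued against the bdevperf RPC socket:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''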
00:15:17.500 [2024-12-10 10:28:52.604370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84440 ] 00:15:17.758 [2024-12-10 10:28:52.743416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.758 [2024-12-10 10:28:52.780470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.758 [2024-12-10 10:28:52.811012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.758 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.758 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:17.758 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:15:18.017 [2024-12-10 10:28:53.163610] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:15:18.017 [2024-12-10 10:28:53.163684] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:18.017 request: 00:15:18.017 { 00:15:18.017 "name": "key0", 00:15:18.017 "path": "", 00:15:18.017 "method": "keyring_file_add_key", 00:15:18.017 "req_id": 1 00:15:18.017 } 00:15:18.017 Got JSON-RPC error response 00:15:18.017 response: 00:15:18.017 { 00:15:18.017 "code": -1, 00:15:18.017 "message": "Operation not permitted" 00:15:18.017 } 00:15:18.017 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:18.276 [2024-12-10 10:28:53.407784] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.276 [2024-12-10 10:28:53.407856] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:18.276 request: 00:15:18.276 { 00:15:18.276 "name": "TLSTEST", 00:15:18.276 "trtype": "tcp", 00:15:18.276 "traddr": "10.0.0.3", 00:15:18.276 "adrfam": "ipv4", 00:15:18.276 "trsvcid": "4420", 00:15:18.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.276 "prchk_reftag": false, 00:15:18.276 "prchk_guard": false, 00:15:18.276 "hdgst": false, 00:15:18.276 "ddgst": false, 00:15:18.276 "psk": "key0", 00:15:18.276 "allow_unrecognized_csi": false, 00:15:18.276 "method": "bdev_nvme_attach_controller", 00:15:18.276 "req_id": 1 00:15:18.276 } 00:15:18.276 Got JSON-RPC error response 00:15:18.276 response: 00:15:18.276 { 00:15:18.276 "code": -126, 00:15:18.276 "message": "Required key not available" 00:15:18.276 } 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84440 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84440 ']' 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84440 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.276 10:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84440 00:15:18.276 killing process with pid 84440 00:15:18.276 Received shutdown signal, test time was about 10.000000 seconds 00:15:18.276 00:15:18.276 Latency(us) 00:15:18.276 [2024-12-10T10:28:53.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.276 [2024-12-10T10:28:53.503Z] =================================================================================================================== 00:15:18.276 [2024-12-10T10:28:53.503Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84440' 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84440 00:15:18.276 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84440 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84014 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84014 ']' 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84014 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84014 00:15:18.535 killing process with pid 84014 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84014' 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84014 00:15:18.535 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84014 00:15:18.794 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:18.794 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Ie9mA82GKo 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Ie9mA82GKo 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84477 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84477 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84477 ']' 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.795 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.795 [2024-12-10 10:28:53.915059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:18.795 [2024-12-10 10:28:53.915159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.053 [2024-12-10 10:28:54.055682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.053 [2024-12-10 10:28:54.097508] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.054 [2024-12-10 10:28:54.097578] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
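The key_long value generated above is a TLS PSK in the interchange format used throughout this test: the literal prefix NVMeTLSkey-1, a two-character hash field (02, from the digest argument 2), a base64-encoded payload derived from the 48-character configured key, and a trailing colon. The key is then written to a mktemp path with owner-only permissions before being registered; a minimal sketch of those steps, with the values taken from the trace above:
  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_path=$(mktemp)
  echo -n "$key_long" > "$key_path"   # no trailing newline, matching echo -n in tls.sh@162
  chmod 0600 "$key_path"              # keyring_file_add_key rejects wider permissions (see the 0666 case later)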
00:15:19.054 [2024-12-10 10:28:54.097592] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.054 [2024-12-10 10:28:54.097602] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.054 [2024-12-10 10:28:54.097611] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.054 [2024-12-10 10:28:54.097645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.054 [2024-12-10 10:28:54.134883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Ie9mA82GKo 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ie9mA82GKo 00:15:19.054 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:19.312 [2024-12-10 10:28:54.530777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.571 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:19.829 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:20.098 [2024-12-10 10:28:55.078921] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.098 [2024-12-10 10:28:55.079205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.098 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:20.357 malloc0 00:15:20.358 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:20.616 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:20.875 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ie9mA82GKo 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
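Taken together, the setup_nvmf_tgt steps above boil down to the following target-side sequence (commands exactly as they appear in the trace; -k on the listener is what requests the TLS-secured channel, matching "secure_channel": true in the saved config later):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0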
00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ie9mA82GKo 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:20.875 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84525 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84525 /var/tmp/bdevperf.sock 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84525 ']' 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.133 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.134 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.134 [2024-12-10 10:28:56.148562] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
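The bdevperf process started above is the TLS initiator. In the lines that follow, the test registers the same key file with bdevperf over its private RPC socket, attaches the controller with --psk key0 (creating TLSTESTn1), and then drives the 10-second verify workload through bdevperf.py:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests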
00:15:21.134 [2024-12-10 10:28:56.148810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84525 ] 00:15:21.134 [2024-12-10 10:28:56.287330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.134 [2024-12-10 10:28:56.330691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.391 [2024-12-10 10:28:56.364913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.391 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.391 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:21.391 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:21.649 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:21.907 [2024-12-10 10:28:56.887907] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:21.907 TLSTESTn1 00:15:21.907 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:21.907 Running I/O for 10 seconds... 00:15:24.245 4048.00 IOPS, 15.81 MiB/s [2024-12-10T10:29:00.406Z] 4194.50 IOPS, 16.38 MiB/s [2024-12-10T10:29:01.339Z] 3989.00 IOPS, 15.58 MiB/s [2024-12-10T10:29:02.274Z] 3899.75 IOPS, 15.23 MiB/s [2024-12-10T10:29:03.209Z] 3982.40 IOPS, 15.56 MiB/s [2024-12-10T10:29:04.144Z] 3982.83 IOPS, 15.56 MiB/s [2024-12-10T10:29:05.519Z] 4027.57 IOPS, 15.73 MiB/s [2024-12-10T10:29:06.452Z] 4042.38 IOPS, 15.79 MiB/s [2024-12-10T10:29:07.386Z] 4068.89 IOPS, 15.89 MiB/s [2024-12-10T10:29:07.386Z] 4078.00 IOPS, 15.93 MiB/s 00:15:32.159 Latency(us) 00:15:32.159 [2024-12-10T10:29:07.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.159 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:32.159 Verification LBA range: start 0x0 length 0x2000 00:15:32.159 TLSTESTn1 : 10.01 4084.92 15.96 0.00 0.00 31281.61 4825.83 25022.84 00:15:32.159 [2024-12-10T10:29:07.386Z] =================================================================================================================== 00:15:32.159 [2024-12-10T10:29:07.386Z] Total : 4084.92 15.96 0.00 0.00 31281.61 4825.83 25022.84 00:15:32.159 { 00:15:32.159 "results": [ 00:15:32.159 { 00:15:32.159 "job": "TLSTESTn1", 00:15:32.159 "core_mask": "0x4", 00:15:32.159 "workload": "verify", 00:15:32.159 "status": "finished", 00:15:32.159 "verify_range": { 00:15:32.159 "start": 0, 00:15:32.159 "length": 8192 00:15:32.159 }, 00:15:32.159 "queue_depth": 128, 00:15:32.159 "io_size": 4096, 00:15:32.159 "runtime": 10.014406, 00:15:32.159 "iops": 4084.9152710605103, 00:15:32.159 "mibps": 15.956700277580119, 00:15:32.159 "io_failed": 0, 00:15:32.159 "io_timeout": 0, 00:15:32.159 "avg_latency_us": 31281.60825195338, 00:15:32.159 "min_latency_us": 4825.832727272727, 00:15:32.159 
"max_latency_us": 25022.836363636365 00:15:32.159 } 00:15:32.159 ], 00:15:32.159 "core_count": 1 00:15:32.159 } 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84525 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84525 ']' 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84525 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84525 00:15:32.159 killing process with pid 84525 00:15:32.159 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.159 00:15:32.159 Latency(us) 00:15:32.159 [2024-12-10T10:29:07.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.159 [2024-12-10T10:29:07.386Z] =================================================================================================================== 00:15:32.159 [2024-12-10T10:29:07.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84525' 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84525 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84525 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Ie9mA82GKo 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ie9mA82GKo 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ie9mA82GKo 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ie9mA82GKo 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ie9mA82GKo 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84653 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84653 /var/tmp/bdevperf.sock 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84653 ']' 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.159 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.418 [2024-12-10 10:29:07.413255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
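The key file was deliberately made world-readable above (chmod 0666 at tls.sh@171). In the failure that follows, keyring_file_check_path rejects the file ("Invalid permissions for key file '/tmp/tmp.Ie9mA82GKo': 0100666"), so keyring_file_add_key returns -1 and the attach then fails with -126 (Required key not available). Restoring owner-only permissions, as the test does at tls.sh@182 further down, makes the key loadable again:
  chmod 0600 /tmp/tmp.Ie9mA82GKo   # owner read/write only; required before keyring_file_add_key will accept the file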
00:15:32.418 [2024-12-10 10:29:07.413659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84653 ] 00:15:32.418 [2024-12-10 10:29:07.559309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.418 [2024-12-10 10:29:07.595258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.418 [2024-12-10 10:29:07.624734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.675 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.675 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:32.675 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:32.933 [2024-12-10 10:29:07.929053] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ie9mA82GKo': 0100666 00:15:32.933 [2024-12-10 10:29:07.929296] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:32.933 request: 00:15:32.933 { 00:15:32.933 "name": "key0", 00:15:32.933 "path": "/tmp/tmp.Ie9mA82GKo", 00:15:32.933 "method": "keyring_file_add_key", 00:15:32.933 "req_id": 1 00:15:32.933 } 00:15:32.933 Got JSON-RPC error response 00:15:32.933 response: 00:15:32.933 { 00:15:32.933 "code": -1, 00:15:32.933 "message": "Operation not permitted" 00:15:32.933 } 00:15:32.933 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:33.191 [2024-12-10 10:29:08.197205] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.191 [2024-12-10 10:29:08.197292] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:33.191 request: 00:15:33.191 { 00:15:33.191 "name": "TLSTEST", 00:15:33.191 "trtype": "tcp", 00:15:33.191 "traddr": "10.0.0.3", 00:15:33.191 "adrfam": "ipv4", 00:15:33.191 "trsvcid": "4420", 00:15:33.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.191 "prchk_reftag": false, 00:15:33.191 "prchk_guard": false, 00:15:33.191 "hdgst": false, 00:15:33.191 "ddgst": false, 00:15:33.191 "psk": "key0", 00:15:33.191 "allow_unrecognized_csi": false, 00:15:33.191 "method": "bdev_nvme_attach_controller", 00:15:33.191 "req_id": 1 00:15:33.191 } 00:15:33.191 Got JSON-RPC error response 00:15:33.191 response: 00:15:33.191 { 00:15:33.191 "code": -126, 00:15:33.191 "message": "Required key not available" 00:15:33.191 } 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84653 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84653 ']' 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84653 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84653 00:15:33.191 killing process with pid 84653 00:15:33.191 Received shutdown signal, test time was about 10.000000 seconds 00:15:33.191 00:15:33.191 Latency(us) 00:15:33.191 [2024-12-10T10:29:08.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.191 [2024-12-10T10:29:08.418Z] =================================================================================================================== 00:15:33.191 [2024-12-10T10:29:08.418Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84653' 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84653 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84653 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 84477 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84477 ']' 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84477 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.191 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84477 00:15:33.449 killing process with pid 84477 00:15:33.449 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.449 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.449 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84477' 00:15:33.449 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84477 00:15:33.449 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84477 00:15:33.449 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84679 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84679 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84679 ']' 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.450 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.450 [2024-12-10 10:29:08.625122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:33.450 [2024-12-10 10:29:08.625218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.708 [2024-12-10 10:29:08.760126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.708 [2024-12-10 10:29:08.793958] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.708 [2024-12-10 10:29:08.794014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.708 [2024-12-10 10:29:08.794040] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.708 [2024-12-10 10:29:08.794047] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.708 [2024-12-10 10:29:08.794054] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
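The startup notices above repeat for every nvmf_tgt launch in this log: with tracepoint group mask 0xFFFF enabled, the trace can be inspected either live or offline, exactly as the notices suggest (the copy destination below is illustrative):
  spdk_trace -s nvmf -i 0          # capture a snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 .       # or copy the shared-memory trace file for offline analysis/debug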
00:15:33.708 [2024-12-10 10:29:08.794082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.708 [2024-12-10 10:29:08.822662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Ie9mA82GKo 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ie9mA82GKo 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Ie9mA82GKo 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ie9mA82GKo 00:15:33.708 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:34.274 [2024-12-10 10:29:09.206615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.274 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.531 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:34.789 [2024-12-10 10:29:09.770772] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.789 [2024-12-10 10:29:09.771042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.789 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:35.047 malloc0 00:15:35.047 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:35.306 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:35.564 
[2024-12-10 10:29:10.535044] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ie9mA82GKo': 0100666 00:15:35.564 [2024-12-10 10:29:10.535089] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:35.564 request: 00:15:35.564 { 00:15:35.564 "name": "key0", 00:15:35.564 "path": "/tmp/tmp.Ie9mA82GKo", 00:15:35.564 "method": "keyring_file_add_key", 00:15:35.564 "req_id": 1 00:15:35.564 } 00:15:35.564 Got JSON-RPC error response 00:15:35.564 response: 00:15:35.564 { 00:15:35.564 "code": -1, 00:15:35.564 "message": "Operation not permitted" 00:15:35.564 } 00:15:35.564 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:35.564 [2024-12-10 10:29:10.783127] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:35.564 [2024-12-10 10:29:10.783223] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:35.825 request: 00:15:35.825 { 00:15:35.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.825 "host": "nqn.2016-06.io.spdk:host1", 00:15:35.825 "psk": "key0", 00:15:35.825 "method": "nvmf_subsystem_add_host", 00:15:35.825 "req_id": 1 00:15:35.825 } 00:15:35.825 Got JSON-RPC error response 00:15:35.825 response: 00:15:35.825 { 00:15:35.825 "code": -32603, 00:15:35.825 "message": "Internal error" 00:15:35.825 } 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84679 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84679 ']' 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84679 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84679 00:15:35.825 killing process with pid 84679 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84679' 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84679 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84679 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Ie9mA82GKo 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84735 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84735 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84735 ']' 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.825 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.103 [2024-12-10 10:29:11.065844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:36.103 [2024-12-10 10:29:11.065941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.103 [2024-12-10 10:29:11.199762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.103 [2024-12-10 10:29:11.236234] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.103 [2024-12-10 10:29:11.236571] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.103 [2024-12-10 10:29:11.236607] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.104 [2024-12-10 10:29:11.236615] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.104 [2024-12-10 10:29:11.236621] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:36.104 [2024-12-10 10:29:11.236651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.104 [2024-12-10 10:29:11.268314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.104 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.104 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:36.104 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:36.104 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.104 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.363 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Ie9mA82GKo 00:15:36.363 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ie9mA82GKo 00:15:36.363 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:36.622 [2024-12-10 10:29:11.589832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.622 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:36.622 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:36.881 [2024-12-10 10:29:12.081976] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:36.881 [2024-12-10 10:29:12.082164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:36.881 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:37.140 malloc0 00:15:37.140 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:37.707 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:37.707 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:37.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
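With the target fully configured and a fresh bdevperf initiator about to attach, the test snapshots both applications with save_config (tls.sh@198 and @199 below), which produces the JSON dumps that make up the rest of this section. The two calls, as issued in the trace:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config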
00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84789 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84789 /var/tmp/bdevperf.sock 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84789 ']' 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.965 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.965 [2024-12-10 10:29:13.189440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:37.965 [2024-12-10 10:29:13.189782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84789 ] 00:15:38.223 [2024-12-10 10:29:13.334311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.223 [2024-12-10 10:29:13.379460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.223 [2024-12-10 10:29:13.415758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.159 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.159 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:39.159 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:39.417 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:39.676 [2024-12-10 10:29:14.671608] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:39.676 TLSTESTn1 00:15:39.676 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:39.935 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:39.935 "subsystems": [ 00:15:39.935 { 00:15:39.935 "subsystem": "keyring", 00:15:39.935 "config": [ 00:15:39.935 { 00:15:39.935 "method": "keyring_file_add_key", 00:15:39.935 "params": { 00:15:39.935 "name": "key0", 00:15:39.935 "path": "/tmp/tmp.Ie9mA82GKo" 00:15:39.935 } 00:15:39.935 } 00:15:39.935 ] 00:15:39.935 }, 
00:15:39.935 { 00:15:39.935 "subsystem": "iobuf", 00:15:39.935 "config": [ 00:15:39.935 { 00:15:39.935 "method": "iobuf_set_options", 00:15:39.935 "params": { 00:15:39.935 "small_pool_count": 8192, 00:15:39.935 "large_pool_count": 1024, 00:15:39.935 "small_bufsize": 8192, 00:15:39.935 "large_bufsize": 135168 00:15:39.935 } 00:15:39.935 } 00:15:39.935 ] 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "subsystem": "sock", 00:15:39.935 "config": [ 00:15:39.935 { 00:15:39.935 "method": "sock_set_default_impl", 00:15:39.935 "params": { 00:15:39.935 "impl_name": "uring" 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "sock_impl_set_options", 00:15:39.935 "params": { 00:15:39.935 "impl_name": "ssl", 00:15:39.935 "recv_buf_size": 4096, 00:15:39.935 "send_buf_size": 4096, 00:15:39.935 "enable_recv_pipe": true, 00:15:39.935 "enable_quickack": false, 00:15:39.935 "enable_placement_id": 0, 00:15:39.935 "enable_zerocopy_send_server": true, 00:15:39.935 "enable_zerocopy_send_client": false, 00:15:39.935 "zerocopy_threshold": 0, 00:15:39.935 "tls_version": 0, 00:15:39.935 "enable_ktls": false 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "sock_impl_set_options", 00:15:39.935 "params": { 00:15:39.935 "impl_name": "posix", 00:15:39.935 "recv_buf_size": 2097152, 00:15:39.935 "send_buf_size": 2097152, 00:15:39.935 "enable_recv_pipe": true, 00:15:39.935 "enable_quickack": false, 00:15:39.935 "enable_placement_id": 0, 00:15:39.935 "enable_zerocopy_send_server": true, 00:15:39.935 "enable_zerocopy_send_client": false, 00:15:39.935 "zerocopy_threshold": 0, 00:15:39.935 "tls_version": 0, 00:15:39.935 "enable_ktls": false 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "sock_impl_set_options", 00:15:39.935 "params": { 00:15:39.935 "impl_name": "uring", 00:15:39.935 "recv_buf_size": 2097152, 00:15:39.935 "send_buf_size": 2097152, 00:15:39.935 "enable_recv_pipe": true, 00:15:39.935 "enable_quickack": false, 00:15:39.935 "enable_placement_id": 0, 00:15:39.935 "enable_zerocopy_send_server": false, 00:15:39.935 "enable_zerocopy_send_client": false, 00:15:39.935 "zerocopy_threshold": 0, 00:15:39.935 "tls_version": 0, 00:15:39.935 "enable_ktls": false 00:15:39.935 } 00:15:39.935 } 00:15:39.935 ] 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "subsystem": "vmd", 00:15:39.935 "config": [] 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "subsystem": "accel", 00:15:39.935 "config": [ 00:15:39.935 { 00:15:39.935 "method": "accel_set_options", 00:15:39.935 "params": { 00:15:39.935 "small_cache_size": 128, 00:15:39.935 "large_cache_size": 16, 00:15:39.935 "task_count": 2048, 00:15:39.935 "sequence_count": 2048, 00:15:39.935 "buf_count": 2048 00:15:39.935 } 00:15:39.935 } 00:15:39.935 ] 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "subsystem": "bdev", 00:15:39.935 "config": [ 00:15:39.935 { 00:15:39.935 "method": "bdev_set_options", 00:15:39.935 "params": { 00:15:39.935 "bdev_io_pool_size": 65535, 00:15:39.935 "bdev_io_cache_size": 256, 00:15:39.935 "bdev_auto_examine": true, 00:15:39.935 "iobuf_small_cache_size": 128, 00:15:39.935 "iobuf_large_cache_size": 16 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "bdev_raid_set_options", 00:15:39.935 "params": { 00:15:39.935 "process_window_size_kb": 1024, 00:15:39.935 "process_max_bandwidth_mb_sec": 0 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "bdev_iscsi_set_options", 00:15:39.935 "params": { 00:15:39.935 "timeout_sec": 30 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 
"method": "bdev_nvme_set_options", 00:15:39.935 "params": { 00:15:39.935 "action_on_timeout": "none", 00:15:39.935 "timeout_us": 0, 00:15:39.935 "timeout_admin_us": 0, 00:15:39.935 "keep_alive_timeout_ms": 10000, 00:15:39.935 "arbitration_burst": 0, 00:15:39.935 "low_priority_weight": 0, 00:15:39.935 "medium_priority_weight": 0, 00:15:39.935 "high_priority_weight": 0, 00:15:39.935 "nvme_adminq_poll_period_us": 10000, 00:15:39.935 "nvme_ioq_poll_period_us": 0, 00:15:39.935 "io_queue_requests": 0, 00:15:39.935 "delay_cmd_submit": true, 00:15:39.935 "transport_retry_count": 4, 00:15:39.935 "bdev_retry_count": 3, 00:15:39.935 "transport_ack_timeout": 0, 00:15:39.935 "ctrlr_loss_timeout_sec": 0, 00:15:39.935 "reconnect_delay_sec": 0, 00:15:39.935 "fast_io_fail_timeout_sec": 0, 00:15:39.935 "disable_auto_failback": false, 00:15:39.935 "generate_uuids": false, 00:15:39.935 "transport_tos": 0, 00:15:39.935 "nvme_error_stat": false, 00:15:39.935 "rdma_srq_size": 0, 00:15:39.935 "io_path_stat": false, 00:15:39.935 "allow_accel_sequence": false, 00:15:39.935 "rdma_max_cq_size": 0, 00:15:39.935 "rdma_cm_event_timeout_ms": 0, 00:15:39.935 "dhchap_digests": [ 00:15:39.935 "sha256", 00:15:39.935 "sha384", 00:15:39.935 "sha512" 00:15:39.935 ], 00:15:39.935 "dhchap_dhgroups": [ 00:15:39.935 "null", 00:15:39.935 "ffdhe2048", 00:15:39.935 "ffdhe3072", 00:15:39.935 "ffdhe4096", 00:15:39.935 "ffdhe6144", 00:15:39.935 "ffdhe8192" 00:15:39.935 ] 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "bdev_nvme_set_hotplug", 00:15:39.935 "params": { 00:15:39.935 "period_us": 100000, 00:15:39.935 "enable": false 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "bdev_malloc_create", 00:15:39.935 "params": { 00:15:39.935 "name": "malloc0", 00:15:39.935 "num_blocks": 8192, 00:15:39.935 "block_size": 4096, 00:15:39.935 "physical_block_size": 4096, 00:15:39.935 "uuid": "6821a463-eeec-4a22-95a3-e2e9a34fd313", 00:15:39.935 "optimal_io_boundary": 0, 00:15:39.935 "md_size": 0, 00:15:39.935 "dif_type": 0, 00:15:39.935 "dif_is_head_of_md": false, 00:15:39.935 "dif_pi_format": 0 00:15:39.935 } 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "method": "bdev_wait_for_examine" 00:15:39.935 } 00:15:39.935 ] 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "subsystem": "nbd", 00:15:39.935 "config": [] 00:15:39.935 }, 00:15:39.935 { 00:15:39.935 "subsystem": "scheduler", 00:15:39.935 "config": [ 00:15:39.935 { 00:15:39.935 "method": "framework_set_scheduler", 00:15:39.935 "params": { 00:15:39.935 "name": "static" 00:15:39.935 } 00:15:39.936 } 00:15:39.936 ] 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "subsystem": "nvmf", 00:15:39.936 "config": [ 00:15:39.936 { 00:15:39.936 "method": "nvmf_set_config", 00:15:39.936 "params": { 00:15:39.936 "discovery_filter": "match_any", 00:15:39.936 "admin_cmd_passthru": { 00:15:39.936 "identify_ctrlr": false 00:15:39.936 }, 00:15:39.936 "dhchap_digests": [ 00:15:39.936 "sha256", 00:15:39.936 "sha384", 00:15:39.936 "sha512" 00:15:39.936 ], 00:15:39.936 "dhchap_dhgroups": [ 00:15:39.936 "null", 00:15:39.936 "ffdhe2048", 00:15:39.936 "ffdhe3072", 00:15:39.936 "ffdhe4096", 00:15:39.936 "ffdhe6144", 00:15:39.936 "ffdhe8192" 00:15:39.936 ] 00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_set_max_subsystems", 00:15:39.936 "params": { 00:15:39.936 "max_subsystems": 1024 00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_set_crdt", 00:15:39.936 "params": { 00:15:39.936 "crdt1": 0, 00:15:39.936 "crdt2": 0, 00:15:39.936 "crdt3": 0 
00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_create_transport", 00:15:39.936 "params": { 00:15:39.936 "trtype": "TCP", 00:15:39.936 "max_queue_depth": 128, 00:15:39.936 "max_io_qpairs_per_ctrlr": 127, 00:15:39.936 "in_capsule_data_size": 4096, 00:15:39.936 "max_io_size": 131072, 00:15:39.936 "io_unit_size": 131072, 00:15:39.936 "max_aq_depth": 128, 00:15:39.936 "num_shared_buffers": 511, 00:15:39.936 "buf_cache_size": 4294967295, 00:15:39.936 "dif_insert_or_strip": false, 00:15:39.936 "zcopy": false, 00:15:39.936 "c2h_success": false, 00:15:39.936 "sock_priority": 0, 00:15:39.936 "abort_timeout_sec": 1, 00:15:39.936 "ack_timeout": 0, 00:15:39.936 "data_wr_pool_size": 0 00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_create_subsystem", 00:15:39.936 "params": { 00:15:39.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.936 "allow_any_host": false, 00:15:39.936 "serial_number": "SPDK00000000000001", 00:15:39.936 "model_number": "SPDK bdev Controller", 00:15:39.936 "max_namespaces": 10, 00:15:39.936 "min_cntlid": 1, 00:15:39.936 "max_cntlid": 65519, 00:15:39.936 "ana_reporting": false 00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_subsystem_add_host", 00:15:39.936 "params": { 00:15:39.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.936 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.936 "psk": "key0" 00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_subsystem_add_ns", 00:15:39.936 "params": { 00:15:39.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.936 "namespace": { 00:15:39.936 "nsid": 1, 00:15:39.936 "bdev_name": "malloc0", 00:15:39.936 "nguid": "6821A463EEEC4A2295A3E2E9A34FD313", 00:15:39.936 "uuid": "6821a463-eeec-4a22-95a3-e2e9a34fd313", 00:15:39.936 "no_auto_visible": false 00:15:39.936 } 00:15:39.936 } 00:15:39.936 }, 00:15:39.936 { 00:15:39.936 "method": "nvmf_subsystem_add_listener", 00:15:39.936 "params": { 00:15:39.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.936 "listen_address": { 00:15:39.936 "trtype": "TCP", 00:15:39.936 "adrfam": "IPv4", 00:15:39.936 "traddr": "10.0.0.3", 00:15:39.936 "trsvcid": "4420" 00:15:39.936 }, 00:15:39.936 "secure_channel": true 00:15:39.936 } 00:15:39.936 } 00:15:39.936 ] 00:15:39.936 } 00:15:39.936 ] 00:15:39.936 }' 00:15:39.936 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:40.504 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:40.504 "subsystems": [ 00:15:40.504 { 00:15:40.504 "subsystem": "keyring", 00:15:40.504 "config": [ 00:15:40.504 { 00:15:40.504 "method": "keyring_file_add_key", 00:15:40.504 "params": { 00:15:40.504 "name": "key0", 00:15:40.504 "path": "/tmp/tmp.Ie9mA82GKo" 00:15:40.504 } 00:15:40.504 } 00:15:40.504 ] 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "subsystem": "iobuf", 00:15:40.504 "config": [ 00:15:40.504 { 00:15:40.504 "method": "iobuf_set_options", 00:15:40.504 "params": { 00:15:40.504 "small_pool_count": 8192, 00:15:40.504 "large_pool_count": 1024, 00:15:40.504 "small_bufsize": 8192, 00:15:40.504 "large_bufsize": 135168 00:15:40.504 } 00:15:40.504 } 00:15:40.504 ] 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "subsystem": "sock", 00:15:40.504 "config": [ 00:15:40.504 { 00:15:40.504 "method": "sock_set_default_impl", 00:15:40.504 "params": { 00:15:40.504 "impl_name": "uring" 00:15:40.504 } 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "method": 
"sock_impl_set_options", 00:15:40.504 "params": { 00:15:40.504 "impl_name": "ssl", 00:15:40.504 "recv_buf_size": 4096, 00:15:40.504 "send_buf_size": 4096, 00:15:40.504 "enable_recv_pipe": true, 00:15:40.504 "enable_quickack": false, 00:15:40.504 "enable_placement_id": 0, 00:15:40.504 "enable_zerocopy_send_server": true, 00:15:40.504 "enable_zerocopy_send_client": false, 00:15:40.504 "zerocopy_threshold": 0, 00:15:40.504 "tls_version": 0, 00:15:40.504 "enable_ktls": false 00:15:40.504 } 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "method": "sock_impl_set_options", 00:15:40.504 "params": { 00:15:40.504 "impl_name": "posix", 00:15:40.504 "recv_buf_size": 2097152, 00:15:40.504 "send_buf_size": 2097152, 00:15:40.504 "enable_recv_pipe": true, 00:15:40.504 "enable_quickack": false, 00:15:40.504 "enable_placement_id": 0, 00:15:40.504 "enable_zerocopy_send_server": true, 00:15:40.504 "enable_zerocopy_send_client": false, 00:15:40.504 "zerocopy_threshold": 0, 00:15:40.504 "tls_version": 0, 00:15:40.504 "enable_ktls": false 00:15:40.504 } 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "method": "sock_impl_set_options", 00:15:40.504 "params": { 00:15:40.504 "impl_name": "uring", 00:15:40.504 "recv_buf_size": 2097152, 00:15:40.504 "send_buf_size": 2097152, 00:15:40.504 "enable_recv_pipe": true, 00:15:40.504 "enable_quickack": false, 00:15:40.504 "enable_placement_id": 0, 00:15:40.504 "enable_zerocopy_send_server": false, 00:15:40.504 "enable_zerocopy_send_client": false, 00:15:40.504 "zerocopy_threshold": 0, 00:15:40.504 "tls_version": 0, 00:15:40.504 "enable_ktls": false 00:15:40.504 } 00:15:40.504 } 00:15:40.504 ] 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "subsystem": "vmd", 00:15:40.504 "config": [] 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "subsystem": "accel", 00:15:40.504 "config": [ 00:15:40.504 { 00:15:40.504 "method": "accel_set_options", 00:15:40.504 "params": { 00:15:40.504 "small_cache_size": 128, 00:15:40.504 "large_cache_size": 16, 00:15:40.504 "task_count": 2048, 00:15:40.504 "sequence_count": 2048, 00:15:40.504 "buf_count": 2048 00:15:40.504 } 00:15:40.504 } 00:15:40.504 ] 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "subsystem": "bdev", 00:15:40.504 "config": [ 00:15:40.504 { 00:15:40.504 "method": "bdev_set_options", 00:15:40.504 "params": { 00:15:40.504 "bdev_io_pool_size": 65535, 00:15:40.504 "bdev_io_cache_size": 256, 00:15:40.504 "bdev_auto_examine": true, 00:15:40.504 "iobuf_small_cache_size": 128, 00:15:40.504 "iobuf_large_cache_size": 16 00:15:40.504 } 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "method": "bdev_raid_set_options", 00:15:40.504 "params": { 00:15:40.504 "process_window_size_kb": 1024, 00:15:40.504 "process_max_bandwidth_mb_sec": 0 00:15:40.504 } 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "method": "bdev_iscsi_set_options", 00:15:40.504 "params": { 00:15:40.504 "timeout_sec": 30 00:15:40.504 } 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "method": "bdev_nvme_set_options", 00:15:40.504 "params": { 00:15:40.504 "action_on_timeout": "none", 00:15:40.504 "timeout_us": 0, 00:15:40.504 "timeout_admin_us": 0, 00:15:40.504 "keep_alive_timeout_ms": 10000, 00:15:40.504 "arbitration_burst": 0, 00:15:40.504 "low_priority_weight": 0, 00:15:40.504 "medium_priority_weight": 0, 00:15:40.504 "high_priority_weight": 0, 00:15:40.504 "nvme_adminq_poll_period_us": 10000, 00:15:40.505 "nvme_ioq_poll_period_us": 0, 00:15:40.505 "io_queue_requests": 512, 00:15:40.505 "delay_cmd_submit": true, 00:15:40.505 "transport_retry_count": 4, 00:15:40.505 "bdev_retry_count": 3, 00:15:40.505 
"transport_ack_timeout": 0, 00:15:40.505 "ctrlr_loss_timeout_sec": 0, 00:15:40.505 "reconnect_delay_sec": 0, 00:15:40.505 "fast_io_fail_timeout_sec": 0, 00:15:40.505 "disable_auto_failback": false, 00:15:40.505 "generate_uuids": false, 00:15:40.505 "transport_tos": 0, 00:15:40.505 "nvme_error_stat": false, 00:15:40.505 "rdma_srq_size": 0, 00:15:40.505 "io_path_stat": false, 00:15:40.505 "allow_accel_sequence": false, 00:15:40.505 "rdma_max_cq_size": 0, 00:15:40.505 "rdma_cm_event_timeout_ms": 0, 00:15:40.505 "dhchap_digests": [ 00:15:40.505 "sha256", 00:15:40.505 "sha384", 00:15:40.505 "sha512" 00:15:40.505 ], 00:15:40.505 "dhchap_dhgroups": [ 00:15:40.505 "null", 00:15:40.505 "ffdhe2048", 00:15:40.505 "ffdhe3072", 00:15:40.505 "ffdhe4096", 00:15:40.505 "ffdhe6144", 00:15:40.505 "ffdhe8192" 00:15:40.505 ] 00:15:40.505 } 00:15:40.505 }, 00:15:40.505 { 00:15:40.505 "method": "bdev_nvme_attach_controller", 00:15:40.505 "params": { 00:15:40.505 "name": "TLSTEST", 00:15:40.505 "trtype": "TCP", 00:15:40.505 "adrfam": "IPv4", 00:15:40.505 "traddr": "10.0.0.3", 00:15:40.505 "trsvcid": "4420", 00:15:40.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.505 "prchk_reftag": false, 00:15:40.505 "prchk_guard": false, 00:15:40.505 "ctrlr_loss_timeout_sec": 0, 00:15:40.505 "reconnect_delay_sec": 0, 00:15:40.505 "fast_io_fail_timeout_sec": 0, 00:15:40.505 "psk": "key0", 00:15:40.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.505 "hdgst": false, 00:15:40.505 "ddgst": false 00:15:40.505 } 00:15:40.505 }, 00:15:40.505 { 00:15:40.505 "method": "bdev_nvme_set_hotplug", 00:15:40.505 "params": { 00:15:40.505 "period_us": 100000, 00:15:40.505 "enable": false 00:15:40.505 } 00:15:40.505 }, 00:15:40.505 { 00:15:40.505 "method": "bdev_wait_for_examine" 00:15:40.505 } 00:15:40.505 ] 00:15:40.505 }, 00:15:40.505 { 00:15:40.505 "subsystem": "nbd", 00:15:40.505 "config": [] 00:15:40.505 } 00:15:40.505 ] 00:15:40.505 }' 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84789 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84789 ']' 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84789 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84789 00:15:40.505 killing process with pid 84789 00:15:40.505 Received shutdown signal, test time was about 10.000000 seconds 00:15:40.505 00:15:40.505 Latency(us) 00:15:40.505 [2024-12-10T10:29:15.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.505 [2024-12-10T10:29:15.732Z] =================================================================================================================== 00:15:40.505 [2024-12-10T10:29:15.732Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84789' 00:15:40.505 10:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84789 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84789 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84735 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84735 ']' 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84735 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.505 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84735 00:15:40.764 killing process with pid 84735 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84735' 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84735 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84735 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.764 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:40.764 "subsystems": [ 00:15:40.764 { 00:15:40.764 "subsystem": "keyring", 00:15:40.764 "config": [ 00:15:40.764 { 00:15:40.764 "method": "keyring_file_add_key", 00:15:40.764 "params": { 00:15:40.764 "name": "key0", 00:15:40.764 "path": "/tmp/tmp.Ie9mA82GKo" 00:15:40.764 } 00:15:40.764 } 00:15:40.764 ] 00:15:40.764 }, 00:15:40.764 { 00:15:40.764 "subsystem": "iobuf", 00:15:40.764 "config": [ 00:15:40.764 { 00:15:40.764 "method": "iobuf_set_options", 00:15:40.764 "params": { 00:15:40.764 "small_pool_count": 8192, 00:15:40.764 "large_pool_count": 1024, 00:15:40.764 "small_bufsize": 8192, 00:15:40.764 "large_bufsize": 135168 00:15:40.764 } 00:15:40.764 } 00:15:40.764 ] 00:15:40.764 }, 00:15:40.764 { 00:15:40.764 "subsystem": "sock", 00:15:40.764 "config": [ 00:15:40.764 { 00:15:40.764 "method": "sock_set_default_impl", 00:15:40.764 "params": { 00:15:40.764 "impl_name": "uring" 00:15:40.764 } 00:15:40.764 }, 00:15:40.764 { 00:15:40.764 "method": "sock_impl_set_options", 00:15:40.764 "params": { 00:15:40.764 "impl_name": "ssl", 00:15:40.764 "recv_buf_size": 4096, 00:15:40.764 "send_buf_size": 4096, 00:15:40.764 "enable_recv_pipe": true, 00:15:40.764 "enable_quickack": false, 00:15:40.764 "enable_placement_id": 0, 00:15:40.764 "enable_zerocopy_send_server": true, 00:15:40.764 "enable_zerocopy_send_client": false, 00:15:40.764 "zerocopy_threshold": 0, 00:15:40.764 "tls_version": 0, 00:15:40.765 "enable_ktls": false 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "sock_impl_set_options", 00:15:40.765 "params": { 00:15:40.765 "impl_name": "posix", 00:15:40.765 "recv_buf_size": 
2097152, 00:15:40.765 "send_buf_size": 2097152, 00:15:40.765 "enable_recv_pipe": true, 00:15:40.765 "enable_quickack": false, 00:15:40.765 "enable_placement_id": 0, 00:15:40.765 "enable_zerocopy_send_server": true, 00:15:40.765 "enable_zerocopy_send_client": false, 00:15:40.765 "zerocopy_threshold": 0, 00:15:40.765 "tls_version": 0, 00:15:40.765 "enable_ktls": false 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "sock_impl_set_options", 00:15:40.765 "params": { 00:15:40.765 "impl_name": "uring", 00:15:40.765 "recv_buf_size": 2097152, 00:15:40.765 "send_buf_size": 2097152, 00:15:40.765 "enable_recv_pipe": true, 00:15:40.765 "enable_quickack": false, 00:15:40.765 "enable_placement_id": 0, 00:15:40.765 "enable_zerocopy_send_server": false, 00:15:40.765 "enable_zerocopy_send_client": false, 00:15:40.765 "zerocopy_threshold": 0, 00:15:40.765 "tls_version": 0, 00:15:40.765 "enable_ktls": false 00:15:40.765 } 00:15:40.765 } 00:15:40.765 ] 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "subsystem": "vmd", 00:15:40.765 "config": [] 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "subsystem": "accel", 00:15:40.765 "config": [ 00:15:40.765 { 00:15:40.765 "method": "accel_set_options", 00:15:40.765 "params": { 00:15:40.765 "small_cache_size": 128, 00:15:40.765 "large_cache_size": 16, 00:15:40.765 "task_count": 2048, 00:15:40.765 "sequence_count": 2048, 00:15:40.765 "buf_count": 2048 00:15:40.765 } 00:15:40.765 } 00:15:40.765 ] 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "subsystem": "bdev", 00:15:40.765 "config": [ 00:15:40.765 { 00:15:40.765 "method": "bdev_set_options", 00:15:40.765 "params": { 00:15:40.765 "bdev_io_pool_size": 65535, 00:15:40.765 "bdev_io_cache_size": 256, 00:15:40.765 "bdev_auto_examine": true, 00:15:40.765 "iobuf_small_cache_size": 128, 00:15:40.765 "iobuf_large_cache_size": 16 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "bdev_raid_set_options", 00:15:40.765 "params": { 00:15:40.765 "process_window_size_kb": 1024, 00:15:40.765 "process_max_bandwidth_mb_sec": 0 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "bdev_iscsi_set_options", 00:15:40.765 "params": { 00:15:40.765 "timeout_sec": 30 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "bdev_nvme_set_options", 00:15:40.765 "params": { 00:15:40.765 "action_on_timeout": "none", 00:15:40.765 "timeout_us": 0, 00:15:40.765 "timeout_admin_us": 0, 00:15:40.765 "keep_alive_timeout_ms": 10000, 00:15:40.765 "arbitration_burst": 0, 00:15:40.765 "low_priority_weight": 0, 00:15:40.765 "medium_priority_weight": 0, 00:15:40.765 "high_priority_weight": 0, 00:15:40.765 "nvme_adminq_poll_period_us": 10000, 00:15:40.765 "nvme_ioq_poll_period_us": 0, 00:15:40.765 "io_queue_requests": 0, 00:15:40.765 "delay_cmd_submit": true, 00:15:40.765 "transport_retry_count": 4, 00:15:40.765 "bdev_retry_count": 3, 00:15:40.765 "transport_ack_timeout": 0, 00:15:40.765 "ctrlr_loss_timeout_sec": 0, 00:15:40.765 "reconnect_delay_sec": 0, 00:15:40.765 "fast_io_fail_timeout_sec": 0, 00:15:40.765 "disable_auto_failback": false, 00:15:40.765 "generate_uuids": false, 00:15:40.765 "transport_tos": 0, 00:15:40.765 "nvme_error_stat": false, 00:15:40.765 "rdma_srq_size": 0, 00:15:40.765 "io_path_stat": false, 00:15:40.765 "allow_accel_sequence": false, 00:15:40.765 "rdma_max_cq_size": 0, 00:15:40.765 "rdma_cm_event_timeout_ms": 0, 00:15:40.765 "dhchap_digests": [ 00:15:40.765 "sha256", 00:15:40.765 "sha384", 00:15:40.765 "sha512" 00:15:40.765 ], 00:15:40.765 "dhchap_dhgroups": [ 00:15:40.765 
"null", 00:15:40.765 "ffdhe2048", 00:15:40.765 "ffdhe3072", 00:15:40.765 "ffdhe4096", 00:15:40.765 "ffdhe6144", 00:15:40.765 "ffdhe8192" 00:15:40.765 ] 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "bdev_nvme_set_hotplug", 00:15:40.765 "params": { 00:15:40.765 "period_us": 100000, 00:15:40.765 "enable": false 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "bdev_malloc_create", 00:15:40.765 "params": { 00:15:40.765 "name": "malloc0", 00:15:40.765 "num_blocks": 8192, 00:15:40.765 "block_size": 4096, 00:15:40.765 "physical_block_size": 4096, 00:15:40.765 "uuid": "6821a463-eeec-4a22-95a3-e2e9a34fd313", 00:15:40.765 "optimal_io_boundary": 0, 00:15:40.765 "md_size": 0, 00:15:40.765 "dif_type": 0, 00:15:40.765 "dif_is_head_of_md": false, 00:15:40.765 "dif_pi_format": 0 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "bdev_wait_for_examine" 00:15:40.765 } 00:15:40.765 ] 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "subsystem": "nbd", 00:15:40.765 "config": [] 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "subsystem": "scheduler", 00:15:40.765 "config": [ 00:15:40.765 { 00:15:40.765 "method": "framework_set_scheduler", 00:15:40.765 "params": { 00:15:40.765 "name": "static" 00:15:40.765 } 00:15:40.765 } 00:15:40.765 ] 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "subsystem": "nvmf", 00:15:40.765 "config": [ 00:15:40.765 { 00:15:40.765 "method": "nvmf_set_config", 00:15:40.765 "params": { 00:15:40.765 "discovery_filter": "match_any", 00:15:40.765 "admin_cmd_passthru": { 00:15:40.765 "identify_ctrlr": false 00:15:40.765 }, 00:15:40.765 "dhchap_digests": [ 00:15:40.765 "sha256", 00:15:40.765 "sha384", 00:15:40.765 "sha512" 00:15:40.765 ], 00:15:40.765 "dhchap_dhgroups": [ 00:15:40.765 "null", 00:15:40.765 "ffdhe2048", 00:15:40.765 "ffdhe3072", 00:15:40.765 "ffdhe4096", 00:15:40.765 "ffdhe6144", 00:15:40.765 "ffdhe8192" 00:15:40.765 ] 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "nvmf_set_max_subsystems", 00:15:40.765 "params": { 00:15:40.765 "max_subsystems": 1024 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "nvmf_set_crdt", 00:15:40.765 "params": { 00:15:40.765 "crdt1": 0, 00:15:40.765 "crdt2": 0, 00:15:40.765 "crdt3": 0 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "nvmf_create_transport", 00:15:40.765 "params": { 00:15:40.765 "trtype": "TCP", 00:15:40.765 "max_queue_depth": 128, 00:15:40.765 "max_io_qpairs_per_ctrlr": 127, 00:15:40.765 "in_capsule_data_size": 4096, 00:15:40.765 "max_io_size": 131072, 00:15:40.765 "io_unit_size": 131072, 00:15:40.765 "max_aq_depth": 128, 00:15:40.765 "num_shared_buffers": 511, 00:15:40.765 "buf_cache_size": 4294967295, 00:15:40.765 "dif_insert_or_strip": false, 00:15:40.765 "zcopy": false, 00:15:40.765 "c2h_success": false, 00:15:40.765 "sock_priority": 0, 00:15:40.765 "abort_timeout_sec": 1, 00:15:40.765 "ack_timeout": 0, 00:15:40.765 "data_wr_pool_size": 0 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "nvmf_create_subsystem", 00:15:40.765 "params": { 00:15:40.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.765 "allow_any_host": false, 00:15:40.765 "serial_number": "SPDK00000000000001", 00:15:40.765 "model_number": "SPDK bdev Controller", 00:15:40.765 "max_namespaces": 10, 00:15:40.765 "min_cntlid": 1, 00:15:40.765 "max_cntlid": 65519, 00:15:40.765 "ana_reporting": false 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "nvmf_subsystem_add_host", 00:15:40.765 "params": { 00:15:40.765 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:40.765 "host": "nqn.2016-06.io.spdk:host1", 00:15:40.765 "psk": "key0" 00:15:40.765 } 00:15:40.765 }, 00:15:40.765 { 00:15:40.765 "method": "nvmf_subsystem_add_ns", 00:15:40.766 "params": { 00:15:40.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.766 "namespace": { 00:15:40.766 "nsid": 1, 00:15:40.766 "bdev_name": "malloc0", 00:15:40.766 "nguid": "6821A463EEEC4A2295A3E2E9A34FD313", 00:15:40.766 "uuid": "6821a463-eeec-4a22-95a3-e2e9a34fd313", 00:15:40.766 "no_auto_visible": false 00:15:40.766 } 00:15:40.766 } 00:15:40.766 }, 00:15:40.766 { 00:15:40.766 "method": "nvmf_subsystem_add_listener", 00:15:40.766 "params": { 00:15:40.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.766 "listen_address": { 00:15:40.766 "trtype": "TCP", 00:15:40.766 "adrfam": "IPv4", 00:15:40.766 "traddr": "10.0.0.3", 00:15:40.766 "trsvcid": "4420" 00:15:40.766 }, 00:15:40.766 "secure_channel": true 00:15:40.766 } 00:15:40.766 } 00:15:40.766 ] 00:15:40.766 } 00:15:40.766 ] 00:15:40.766 }' 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84833 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84833 00:15:40.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84833 ']' 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.766 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.766 [2024-12-10 10:29:15.960826] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:40.766 [2024-12-10 10:29:15.961190] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.025 [2024-12-10 10:29:16.106731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.025 [2024-12-10 10:29:16.146910] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.025 [2024-12-10 10:29:16.147139] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.025 [2024-12-10 10:29:16.147301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.025 [2024-12-10 10:29:16.147446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:41.025 [2024-12-10 10:29:16.147458] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.025 [2024-12-10 10:29:16.147545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.284 [2024-12-10 10:29:16.292245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:41.284 [2024-12-10 10:29:16.348878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.284 [2024-12-10 10:29:16.386501] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:41.284 [2024-12-10 10:29:16.386753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:41.852 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.852 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:41.852 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:41.852 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.852 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84865 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84865 /var/tmp/bdevperf.sock 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84865 ']' 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
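Both applications in this stage are restarted from previously saved JSON rather than reconfigured by hand: the target config echoed above reaches nvmf_tgt through -c /dev/fd/62, i.e. via a file descriptor the shell creates on the fly. A minimal sketch of that pattern, assuming the default SPDK repo layout and that CONFIG_JSON already holds the output of an earlier 'rpc.py save_config':

    # Sketch only: replay a saved configuration through a process-substitution fd,
    # mirroring the 'nvmfappstart -m 0x2 -c /dev/fd/62' invocation above.
    CONFIG_JSON=$(./scripts/rpc.py save_config)        # dump the running target's config
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
        -c <(echo "$CONFIG_JSON") &                    # the <(...) shows up as /dev/fd/NN
    nvmfpid=$!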
00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.852 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:41.852 "subsystems": [ 00:15:41.852 { 00:15:41.853 "subsystem": "keyring", 00:15:41.853 "config": [ 00:15:41.853 { 00:15:41.853 "method": "keyring_file_add_key", 00:15:41.853 "params": { 00:15:41.853 "name": "key0", 00:15:41.853 "path": "/tmp/tmp.Ie9mA82GKo" 00:15:41.853 } 00:15:41.853 } 00:15:41.853 ] 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "subsystem": "iobuf", 00:15:41.853 "config": [ 00:15:41.853 { 00:15:41.853 "method": "iobuf_set_options", 00:15:41.853 "params": { 00:15:41.853 "small_pool_count": 8192, 00:15:41.853 "large_pool_count": 1024, 00:15:41.853 "small_bufsize": 8192, 00:15:41.853 "large_bufsize": 135168 00:15:41.853 } 00:15:41.853 } 00:15:41.853 ] 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "subsystem": "sock", 00:15:41.853 "config": [ 00:15:41.853 { 00:15:41.853 "method": "sock_set_default_impl", 00:15:41.853 "params": { 00:15:41.853 "impl_name": "uring" 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "sock_impl_set_options", 00:15:41.853 "params": { 00:15:41.853 "impl_name": "ssl", 00:15:41.853 "recv_buf_size": 4096, 00:15:41.853 "send_buf_size": 4096, 00:15:41.853 "enable_recv_pipe": true, 00:15:41.853 "enable_quickack": false, 00:15:41.853 "enable_placement_id": 0, 00:15:41.853 "enable_zerocopy_send_server": true, 00:15:41.853 "enable_zerocopy_send_client": false, 00:15:41.853 "zerocopy_threshold": 0, 00:15:41.853 "tls_version": 0, 00:15:41.853 "enable_ktls": false 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "sock_impl_set_options", 00:15:41.853 "params": { 00:15:41.853 "impl_name": "posix", 00:15:41.853 "recv_buf_size": 2097152, 00:15:41.853 "send_buf_size": 2097152, 00:15:41.853 "enable_recv_pipe": true, 00:15:41.853 "enable_quickack": false, 00:15:41.853 "enable_placement_id": 0, 00:15:41.853 "enable_zerocopy_send_server": true, 00:15:41.853 "enable_zerocopy_send_client": false, 00:15:41.853 "zerocopy_threshold": 0, 00:15:41.853 "tls_version": 0, 00:15:41.853 "enable_ktls": false 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "sock_impl_set_options", 00:15:41.853 "params": { 00:15:41.853 "impl_name": "uring", 00:15:41.853 "recv_buf_size": 2097152, 00:15:41.853 "send_buf_size": 2097152, 00:15:41.853 "enable_recv_pipe": true, 00:15:41.853 "enable_quickack": false, 00:15:41.853 "enable_placement_id": 0, 00:15:41.853 "enable_zerocopy_send_server": false, 00:15:41.853 "enable_zerocopy_send_client": false, 00:15:41.853 "zerocopy_threshold": 0, 00:15:41.853 "tls_version": 0, 00:15:41.853 "enable_ktls": false 00:15:41.853 } 00:15:41.853 } 00:15:41.853 ] 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "subsystem": "vmd", 00:15:41.853 "config": [] 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "subsystem": "accel", 00:15:41.853 "config": [ 00:15:41.853 { 00:15:41.853 "method": "accel_set_options", 00:15:41.853 "params": { 00:15:41.853 "small_cache_size": 128, 00:15:41.853 "large_cache_size": 16, 00:15:41.853 "task_count": 2048, 00:15:41.853 "sequence_count": 2048, 00:15:41.853 "buf_count": 2048 
00:15:41.853 } 00:15:41.853 } 00:15:41.853 ] 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "subsystem": "bdev", 00:15:41.853 "config": [ 00:15:41.853 { 00:15:41.853 "method": "bdev_set_options", 00:15:41.853 "params": { 00:15:41.853 "bdev_io_pool_size": 65535, 00:15:41.853 "bdev_io_cache_size": 256, 00:15:41.853 "bdev_auto_examine": true, 00:15:41.853 "iobuf_small_cache_size": 128, 00:15:41.853 "iobuf_large_cache_size": 16 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "bdev_raid_set_options", 00:15:41.853 "params": { 00:15:41.853 "process_window_size_kb": 1024, 00:15:41.853 "process_max_bandwidth_mb_sec": 0 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "bdev_iscsi_set_options", 00:15:41.853 "params": { 00:15:41.853 "timeout_sec": 30 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "bdev_nvme_set_options", 00:15:41.853 "params": { 00:15:41.853 "action_on_timeout": "none", 00:15:41.853 "timeout_us": 0, 00:15:41.853 "timeout_admin_us": 0, 00:15:41.853 "keep_alive_timeout_ms": 10000, 00:15:41.853 "arbitration_burst": 0, 00:15:41.853 "low_priority_weight": 0, 00:15:41.853 "medium_priority_weight": 0, 00:15:41.853 "high_priority_weight": 0, 00:15:41.853 "nvme_adminq_poll_period_us": 10000, 00:15:41.853 "nvme_ioq_poll_period_us": 0, 00:15:41.853 "io_queue_requests": 512, 00:15:41.853 "delay_cmd_submit": true, 00:15:41.853 "transport_retry_count": 4, 00:15:41.853 "bdev_retry_count": 3, 00:15:41.853 "transport_ack_timeout": 0, 00:15:41.853 "ctrlr_loss_timeout_sec": 0, 00:15:41.853 "reconnect_delay_sec": 0, 00:15:41.853 "fast_io_fail_timeout_sec": 0, 00:15:41.853 "disable_auto_failback": false, 00:15:41.853 "generate_uuids": false, 00:15:41.853 "transport_tos": 0, 00:15:41.853 "nvme_error_stat": false, 00:15:41.853 "rdma_srq_size": 0, 00:15:41.853 "io_path_stat": false, 00:15:41.853 "allow_accel_sequence": false, 00:15:41.853 "rdma_max_cq_size": 0, 00:15:41.853 "rdma_cm_event_timeout_ms": 0, 00:15:41.853 "dhchap_digests": [ 00:15:41.853 "sha256", 00:15:41.853 "sha384", 00:15:41.853 "sha512" 00:15:41.853 ], 00:15:41.853 "dhchap_dhgroups": [ 00:15:41.853 "null", 00:15:41.853 "ffdhe2048", 00:15:41.853 "ffdhe3072", 00:15:41.853 "ffdhe4096", 00:15:41.853 "ffdhe6144", 00:15:41.853 "ffdhe8192" 00:15:41.853 ] 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "bdev_nvme_attach_controller", 00:15:41.853 "params": { 00:15:41.853 "name": "TLSTEST", 00:15:41.853 "trtype": "TCP", 00:15:41.853 "adrfam": "IPv4", 00:15:41.853 "traddr": "10.0.0.3", 00:15:41.853 "trsvcid": "4420", 00:15:41.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.853 "prchk_reftag": false, 00:15:41.853 "prchk_guard": false, 00:15:41.853 "ctrlr_loss_timeout_sec": 0, 00:15:41.853 "reconnect_delay_sec": 0, 00:15:41.853 "fast_io_fail_timeout_sec": 0, 00:15:41.853 "psk": "key0", 00:15:41.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.853 "hdgst": false, 00:15:41.853 "ddgst": false 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "bdev_nvme_set_hotplug", 00:15:41.853 "params": { 00:15:41.853 "period_us": 100000, 00:15:41.853 "enable": false 00:15:41.853 } 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "method": "bdev_wait_for_examine" 00:15:41.853 } 00:15:41.853 ] 00:15:41.853 }, 00:15:41.853 { 00:15:41.853 "subsystem": "nbd", 00:15:41.853 "config": [] 00:15:41.853 } 00:15:41.853 ] 00:15:41.853 }' 00:15:42.113 [2024-12-10 10:29:17.092298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
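The bdevperf side is brought up the same way, here with the echoed JSON on -c /dev/fd/63 and with -z so it starts idle and waits to be driven over its private RPC socket; the harness then blocks until that socket answers before issuing any RPCs. A rough plain-bash equivalent, using rpc_get_methods as the liveness probe (an assumption; the autotest waitforlisten helper may poll differently):

    # Sketch only: start bdevperf idle on a private RPC socket and wait for it to listen.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" || exit 1              # give up if bdevperf died during startup
        sleep 0.5
    done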
00:15:42.113 [2024-12-10 10:29:17.092699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84865 ] 00:15:42.113 [2024-12-10 10:29:17.231980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.113 [2024-12-10 10:29:17.276988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.372 [2024-12-10 10:29:17.393896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.372 [2024-12-10 10:29:17.426577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:42.937 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.937 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:42.937 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:43.196 Running I/O for 10 seconds... 00:15:45.070 3920.00 IOPS, 15.31 MiB/s [2024-12-10T10:29:21.675Z] 3967.50 IOPS, 15.50 MiB/s [2024-12-10T10:29:22.242Z] 3979.00 IOPS, 15.54 MiB/s [2024-12-10T10:29:23.668Z] 3986.25 IOPS, 15.57 MiB/s [2024-12-10T10:29:24.604Z] 3993.00 IOPS, 15.60 MiB/s [2024-12-10T10:29:25.540Z] 4002.17 IOPS, 15.63 MiB/s [2024-12-10T10:29:26.477Z] 4002.71 IOPS, 15.64 MiB/s [2024-12-10T10:29:27.412Z] 4004.12 IOPS, 15.64 MiB/s [2024-12-10T10:29:28.349Z] 4005.22 IOPS, 15.65 MiB/s [2024-12-10T10:29:28.349Z] 4005.50 IOPS, 15.65 MiB/s 00:15:53.122 Latency(us) 00:15:53.122 [2024-12-10T10:29:28.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:53.122 Verification LBA range: start 0x0 length 0x2000 00:15:53.122 TLSTESTn1 : 10.02 4011.97 15.67 0.00 0.00 31848.90 5153.51 24307.90 00:15:53.122 [2024-12-10T10:29:28.349Z] =================================================================================================================== 00:15:53.122 [2024-12-10T10:29:28.349Z] Total : 4011.97 15.67 0.00 0.00 31848.90 5153.51 24307.90 00:15:53.122 { 00:15:53.122 "results": [ 00:15:53.122 { 00:15:53.122 "job": "TLSTESTn1", 00:15:53.122 "core_mask": "0x4", 00:15:53.122 "workload": "verify", 00:15:53.122 "status": "finished", 00:15:53.122 "verify_range": { 00:15:53.122 "start": 0, 00:15:53.122 "length": 8192 00:15:53.122 }, 00:15:53.122 "queue_depth": 128, 00:15:53.122 "io_size": 4096, 00:15:53.122 "runtime": 10.015271, 00:15:53.122 "iops": 4011.973315549824, 00:15:53.122 "mibps": 15.6717707638665, 00:15:53.122 "io_failed": 0, 00:15:53.122 "io_timeout": 0, 00:15:53.122 "avg_latency_us": 31848.903754148843, 00:15:53.122 "min_latency_us": 5153.512727272728, 00:15:53.122 "max_latency_us": 24307.898181818182 00:15:53.122 } 00:15:53.122 ], 00:15:53.122 "core_count": 1 00:15:53.122 } 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84865 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84865 ']' 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 84865 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84865 00:15:53.122 killing process with pid 84865 00:15:53.122 Received shutdown signal, test time was about 10.000000 seconds 00:15:53.122 00:15:53.122 Latency(us) 00:15:53.122 [2024-12-10T10:29:28.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.122 [2024-12-10T10:29:28.349Z] =================================================================================================================== 00:15:53.122 [2024-12-10T10:29:28.349Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84865' 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84865 00:15:53.122 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84865 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84833 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84833 ']' 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84833 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84833 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:53.382 killing process with pid 84833 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84833' 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84833 00:15:53.382 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84833 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85013 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:53.641 10:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85013 00:15:53.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85013 ']' 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.641 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.641 [2024-12-10 10:29:28.738680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:53.641 [2024-12-10 10:29:28.739059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.900 [2024-12-10 10:29:28.883472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.901 [2024-12-10 10:29:28.925701] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.901 [2024-12-10 10:29:28.925765] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.901 [2024-12-10 10:29:28.925788] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.901 [2024-12-10 10:29:28.925808] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.901 [2024-12-10 10:29:28.925817] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
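This target is launched with -e 0xFFFF, so every tracepoint group is enabled, and the startup notices above spell out how to look at them. A sketch of the two options named in those notices, assuming spdk_trace was built under build/bin:

    # Sketch only: inspect the nvmf tracepoints of the running app with shm id 0.
    ./build/bin/spdk_trace -s nvmf -i 0               # live snapshot, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/                    # or keep the shm file for offline analysis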
00:15:53.901 [2024-12-10 10:29:28.925847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.901 [2024-12-10 10:29:28.960177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Ie9mA82GKo 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ie9mA82GKo 00:15:54.836 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:54.836 [2024-12-10 10:29:29.999677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.836 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:55.403 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:55.403 [2024-12-10 10:29:30.603910] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:55.403 [2024-12-10 10:29:30.604315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.403 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:55.970 malloc0 00:15:55.970 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:55.970 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:56.229 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=85069 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 85069 /var/tmp/bdevperf.sock 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85069 ']' 00:15:56.488 
10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.488 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.488 [2024-12-10 10:29:31.664347] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:56.488 [2024-12-10 10:29:31.664701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85069 ] 00:15:56.747 [2024-12-10 10:29:31.807143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.747 [2024-12-10 10:29:31.849649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.747 [2024-12-10 10:29:31.883453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.683 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.683 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:57.683 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:15:57.683 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:57.942 [2024-12-10 10:29:32.981723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:57.942 nvme0n1 00:15:57.942 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.200 Running I/O for 1 seconds... 
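The initiator side of this run is three calls against the idle bdevperf instance: register the PSK file as key0, attach the NVMe/TCP controller with that key (which is what produces nvme0n1 above), and then trigger the pre-configured verify workload through the bdevperf helper script. A sketch of that sequence, reusing the key path from this run:

    # Sketch only: initiator-side TLS attach plus test kick-off over the bdevperf RPC socket.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests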
00:15:59.138 4605.00 IOPS, 17.99 MiB/s 00:15:59.138 Latency(us) 00:15:59.138 [2024-12-10T10:29:34.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.138 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:59.138 Verification LBA range: start 0x0 length 0x2000 00:15:59.138 nvme0n1 : 1.03 4595.45 17.95 0.00 0.00 27568.25 9055.88 19541.64 00:15:59.138 [2024-12-10T10:29:34.365Z] =================================================================================================================== 00:15:59.138 [2024-12-10T10:29:34.365Z] Total : 4595.45 17.95 0.00 0.00 27568.25 9055.88 19541.64 00:15:59.138 { 00:15:59.138 "results": [ 00:15:59.138 { 00:15:59.138 "job": "nvme0n1", 00:15:59.138 "core_mask": "0x2", 00:15:59.138 "workload": "verify", 00:15:59.138 "status": "finished", 00:15:59.138 "verify_range": { 00:15:59.138 "start": 0, 00:15:59.138 "length": 8192 00:15:59.138 }, 00:15:59.138 "queue_depth": 128, 00:15:59.138 "io_size": 4096, 00:15:59.138 "runtime": 1.03015, 00:15:59.138 "iops": 4595.447264961414, 00:15:59.138 "mibps": 17.95096587875552, 00:15:59.138 "io_failed": 0, 00:15:59.138 "io_timeout": 0, 00:15:59.138 "avg_latency_us": 27568.249368206783, 00:15:59.138 "min_latency_us": 9055.883636363636, 00:15:59.138 "max_latency_us": 19541.643636363635 00:15:59.138 } 00:15:59.138 ], 00:15:59.138 "core_count": 1 00:15:59.138 } 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 85069 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85069 ']' 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85069 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85069 00:15:59.138 killing process with pid 85069 00:15:59.138 Received shutdown signal, test time was about 1.000000 seconds 00:15:59.138 00:15:59.138 Latency(us) 00:15:59.138 [2024-12-10T10:29:34.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.138 [2024-12-10T10:29:34.365Z] =================================================================================================================== 00:15:59.138 [2024-12-10T10:29:34.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85069' 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85069 00:15:59.138 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85069 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85013 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85013 ']' 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85013 00:15:59.397 10:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85013 00:15:59.397 killing process with pid 85013 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85013' 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85013 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85013 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85120 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85120 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85120 ']' 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.397 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.656 [2024-12-10 10:29:34.677970] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:59.656 [2024-12-10 10:29:34.678080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.656 [2024-12-10 10:29:34.819416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.656 [2024-12-10 10:29:34.850238] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.656 [2024-12-10 10:29:34.850291] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
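Teardown between stages follows the killprocess pattern visible above: check that the pid is still alive, record the process name, then kill it and wait so the listener port and shared memory are free before the next app starts. A plain-shell sketch with the pid as a placeholder:

    # Sketch only: the killprocess teardown used after each stage.
    pid=85013                                   # placeholder; any nvmf_tgt or bdevperf pid
    kill -0 "$pid"                              # fail fast if it already exited
    ps --no-headers -o comm= "$pid"             # name of the process being killed
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # reap it before starting the next stage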
00:15:59.656 [2024-12-10 10:29:34.850301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.657 [2024-12-10 10:29:34.850307] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.657 [2024-12-10 10:29:34.850313] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.657 [2024-12-10 10:29:34.850340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.657 [2024-12-10 10:29:34.875683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.593 [2024-12-10 10:29:35.658947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.593 malloc0 00:16:00.593 [2024-12-10 10:29:35.694692] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:00.593 [2024-12-10 10:29:35.695018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=85152 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 85152 /var/tmp/bdevperf.sock 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85152 ']' 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
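For reference, the bdevperf command launched just above can be read flag by flag. These are the standard bdevperf options, and they match the parameters echoed back in the results below ("Core Mask 0x2, workload: verify, depth: 128, IO size: 4096", one second of I/O):

  # -m 2            core mask 0x2, i.e. run on core 1
  # -z              start with no bdevs and wait to be configured over RPC
  # -r              RPC socket (/var/tmp/bdevperf.sock) that the rpc.py calls below talk to
  # -q 128 -o 4k    queue depth 128, 4 KiB I/O size
  # -w verify -t 1  data-verification workload, run for 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1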
00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.593 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.593 [2024-12-10 10:29:35.774153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:00.593 [2024-12-10 10:29:35.774422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85152 ] 00:16:00.852 [2024-12-10 10:29:35.905138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.852 [2024-12-10 10:29:35.940165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.852 [2024-12-10 10:29:35.969545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.852 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.852 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:00.852 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo 00:16:01.109 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:01.367 [2024-12-10 10:29:36.485288] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:01.367 nvme0n1 00:16:01.367 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:01.630 Running I/O for 1 seconds... 
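The two RPCs that wire this bdevperf instance to the TLS listener are easier to see in isolation; the socket path, key file and NQNs below are exactly the ones used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register the PSK file under the name "key0" in the bdevperf process
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ie9mA82GKo
  # attach an NVMe/TCP controller to the TLS listener, presenting that key
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # then drive the verify workload against the resulting nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests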
00:16:02.603 4756.00 IOPS, 18.58 MiB/s 00:16:02.603 Latency(us) 00:16:02.603 [2024-12-10T10:29:37.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.603 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:02.603 Verification LBA range: start 0x0 length 0x2000 00:16:02.603 nvme0n1 : 1.01 4818.30 18.82 0.00 0.00 26363.50 4587.52 20018.27 00:16:02.603 [2024-12-10T10:29:37.830Z] =================================================================================================================== 00:16:02.603 [2024-12-10T10:29:37.830Z] Total : 4818.30 18.82 0.00 0.00 26363.50 4587.52 20018.27 00:16:02.603 { 00:16:02.603 "results": [ 00:16:02.603 { 00:16:02.603 "job": "nvme0n1", 00:16:02.603 "core_mask": "0x2", 00:16:02.603 "workload": "verify", 00:16:02.603 "status": "finished", 00:16:02.603 "verify_range": { 00:16:02.603 "start": 0, 00:16:02.603 "length": 8192 00:16:02.603 }, 00:16:02.603 "queue_depth": 128, 00:16:02.603 "io_size": 4096, 00:16:02.603 "runtime": 1.013635, 00:16:02.603 "iops": 4818.3024461467885, 00:16:02.603 "mibps": 18.821493930260893, 00:16:02.603 "io_failed": 0, 00:16:02.603 "io_timeout": 0, 00:16:02.603 "avg_latency_us": 26363.50403692949, 00:16:02.603 "min_latency_us": 4587.52, 00:16:02.603 "max_latency_us": 20018.269090909092 00:16:02.603 } 00:16:02.603 ], 00:16:02.603 "core_count": 1 00:16:02.603 } 00:16:02.603 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:02.603 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.603 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.862 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.862 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:02.862 "subsystems": [ 00:16:02.862 { 00:16:02.862 "subsystem": "keyring", 00:16:02.862 "config": [ 00:16:02.862 { 00:16:02.862 "method": "keyring_file_add_key", 00:16:02.862 "params": { 00:16:02.862 "name": "key0", 00:16:02.862 "path": "/tmp/tmp.Ie9mA82GKo" 00:16:02.862 } 00:16:02.862 } 00:16:02.862 ] 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "subsystem": "iobuf", 00:16:02.862 "config": [ 00:16:02.862 { 00:16:02.862 "method": "iobuf_set_options", 00:16:02.862 "params": { 00:16:02.862 "small_pool_count": 8192, 00:16:02.862 "large_pool_count": 1024, 00:16:02.862 "small_bufsize": 8192, 00:16:02.862 "large_bufsize": 135168 00:16:02.862 } 00:16:02.862 } 00:16:02.862 ] 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "subsystem": "sock", 00:16:02.862 "config": [ 00:16:02.862 { 00:16:02.862 "method": "sock_set_default_impl", 00:16:02.862 "params": { 00:16:02.862 "impl_name": "uring" 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "sock_impl_set_options", 00:16:02.862 "params": { 00:16:02.862 "impl_name": "ssl", 00:16:02.862 "recv_buf_size": 4096, 00:16:02.862 "send_buf_size": 4096, 00:16:02.862 "enable_recv_pipe": true, 00:16:02.862 "enable_quickack": false, 00:16:02.862 "enable_placement_id": 0, 00:16:02.862 "enable_zerocopy_send_server": true, 00:16:02.862 "enable_zerocopy_send_client": false, 00:16:02.862 "zerocopy_threshold": 0, 00:16:02.862 "tls_version": 0, 00:16:02.862 "enable_ktls": false 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "sock_impl_set_options", 00:16:02.862 "params": { 00:16:02.862 "impl_name": "posix", 00:16:02.862 "recv_buf_size": 2097152, 
00:16:02.862 "send_buf_size": 2097152, 00:16:02.862 "enable_recv_pipe": true, 00:16:02.862 "enable_quickack": false, 00:16:02.862 "enable_placement_id": 0, 00:16:02.862 "enable_zerocopy_send_server": true, 00:16:02.862 "enable_zerocopy_send_client": false, 00:16:02.862 "zerocopy_threshold": 0, 00:16:02.862 "tls_version": 0, 00:16:02.862 "enable_ktls": false 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "sock_impl_set_options", 00:16:02.862 "params": { 00:16:02.862 "impl_name": "uring", 00:16:02.862 "recv_buf_size": 2097152, 00:16:02.862 "send_buf_size": 2097152, 00:16:02.862 "enable_recv_pipe": true, 00:16:02.862 "enable_quickack": false, 00:16:02.862 "enable_placement_id": 0, 00:16:02.862 "enable_zerocopy_send_server": false, 00:16:02.862 "enable_zerocopy_send_client": false, 00:16:02.862 "zerocopy_threshold": 0, 00:16:02.862 "tls_version": 0, 00:16:02.862 "enable_ktls": false 00:16:02.862 } 00:16:02.862 } 00:16:02.862 ] 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "subsystem": "vmd", 00:16:02.862 "config": [] 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "subsystem": "accel", 00:16:02.862 "config": [ 00:16:02.862 { 00:16:02.862 "method": "accel_set_options", 00:16:02.862 "params": { 00:16:02.862 "small_cache_size": 128, 00:16:02.862 "large_cache_size": 16, 00:16:02.862 "task_count": 2048, 00:16:02.862 "sequence_count": 2048, 00:16:02.862 "buf_count": 2048 00:16:02.862 } 00:16:02.862 } 00:16:02.862 ] 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "subsystem": "bdev", 00:16:02.862 "config": [ 00:16:02.862 { 00:16:02.862 "method": "bdev_set_options", 00:16:02.862 "params": { 00:16:02.862 "bdev_io_pool_size": 65535, 00:16:02.862 "bdev_io_cache_size": 256, 00:16:02.862 "bdev_auto_examine": true, 00:16:02.862 "iobuf_small_cache_size": 128, 00:16:02.862 "iobuf_large_cache_size": 16 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "bdev_raid_set_options", 00:16:02.862 "params": { 00:16:02.862 "process_window_size_kb": 1024, 00:16:02.862 "process_max_bandwidth_mb_sec": 0 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "bdev_iscsi_set_options", 00:16:02.862 "params": { 00:16:02.862 "timeout_sec": 30 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "bdev_nvme_set_options", 00:16:02.862 "params": { 00:16:02.862 "action_on_timeout": "none", 00:16:02.862 "timeout_us": 0, 00:16:02.862 "timeout_admin_us": 0, 00:16:02.862 "keep_alive_timeout_ms": 10000, 00:16:02.862 "arbitration_burst": 0, 00:16:02.862 "low_priority_weight": 0, 00:16:02.862 "medium_priority_weight": 0, 00:16:02.862 "high_priority_weight": 0, 00:16:02.862 "nvme_adminq_poll_period_us": 10000, 00:16:02.862 "nvme_ioq_poll_period_us": 0, 00:16:02.862 "io_queue_requests": 0, 00:16:02.862 "delay_cmd_submit": true, 00:16:02.862 "transport_retry_count": 4, 00:16:02.862 "bdev_retry_count": 3, 00:16:02.862 "transport_ack_timeout": 0, 00:16:02.862 "ctrlr_loss_timeout_sec": 0, 00:16:02.862 "reconnect_delay_sec": 0, 00:16:02.862 "fast_io_fail_timeout_sec": 0, 00:16:02.862 "disable_auto_failback": false, 00:16:02.862 "generate_uuids": false, 00:16:02.862 "transport_tos": 0, 00:16:02.862 "nvme_error_stat": false, 00:16:02.862 "rdma_srq_size": 0, 00:16:02.862 "io_path_stat": false, 00:16:02.862 "allow_accel_sequence": false, 00:16:02.862 "rdma_max_cq_size": 0, 00:16:02.862 "rdma_cm_event_timeout_ms": 0, 00:16:02.862 "dhchap_digests": [ 00:16:02.862 "sha256", 00:16:02.862 "sha384", 00:16:02.862 "sha512" 00:16:02.862 ], 00:16:02.862 "dhchap_dhgroups": [ 00:16:02.862 "null", 
00:16:02.862 "ffdhe2048", 00:16:02.862 "ffdhe3072", 00:16:02.862 "ffdhe4096", 00:16:02.862 "ffdhe6144", 00:16:02.862 "ffdhe8192" 00:16:02.862 ] 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "bdev_nvme_set_hotplug", 00:16:02.862 "params": { 00:16:02.862 "period_us": 100000, 00:16:02.862 "enable": false 00:16:02.862 } 00:16:02.862 }, 00:16:02.862 { 00:16:02.862 "method": "bdev_malloc_create", 00:16:02.862 "params": { 00:16:02.862 "name": "malloc0", 00:16:02.862 "num_blocks": 8192, 00:16:02.862 "block_size": 4096, 00:16:02.862 "physical_block_size": 4096, 00:16:02.862 "uuid": "ae991623-dc4a-43f6-8788-ba0df12578c4", 00:16:02.862 "optimal_io_boundary": 0, 00:16:02.862 "md_size": 0, 00:16:02.862 "dif_type": 0, 00:16:02.863 "dif_is_head_of_md": false, 00:16:02.863 "dif_pi_format": 0 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "bdev_wait_for_examine" 00:16:02.863 } 00:16:02.863 ] 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "subsystem": "nbd", 00:16:02.863 "config": [] 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "subsystem": "scheduler", 00:16:02.863 "config": [ 00:16:02.863 { 00:16:02.863 "method": "framework_set_scheduler", 00:16:02.863 "params": { 00:16:02.863 "name": "static" 00:16:02.863 } 00:16:02.863 } 00:16:02.863 ] 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "subsystem": "nvmf", 00:16:02.863 "config": [ 00:16:02.863 { 00:16:02.863 "method": "nvmf_set_config", 00:16:02.863 "params": { 00:16:02.863 "discovery_filter": "match_any", 00:16:02.863 "admin_cmd_passthru": { 00:16:02.863 "identify_ctrlr": false 00:16:02.863 }, 00:16:02.863 "dhchap_digests": [ 00:16:02.863 "sha256", 00:16:02.863 "sha384", 00:16:02.863 "sha512" 00:16:02.863 ], 00:16:02.863 "dhchap_dhgroups": [ 00:16:02.863 "null", 00:16:02.863 "ffdhe2048", 00:16:02.863 "ffdhe3072", 00:16:02.863 "ffdhe4096", 00:16:02.863 "ffdhe6144", 00:16:02.863 "ffdhe8192" 00:16:02.863 ] 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_set_max_subsystems", 00:16:02.863 "params": { 00:16:02.863 "max_subsystems": 1024 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_set_crdt", 00:16:02.863 "params": { 00:16:02.863 "crdt1": 0, 00:16:02.863 "crdt2": 0, 00:16:02.863 "crdt3": 0 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_create_transport", 00:16:02.863 "params": { 00:16:02.863 "trtype": "TCP", 00:16:02.863 "max_queue_depth": 128, 00:16:02.863 "max_io_qpairs_per_ctrlr": 127, 00:16:02.863 "in_capsule_data_size": 4096, 00:16:02.863 "max_io_size": 131072, 00:16:02.863 "io_unit_size": 131072, 00:16:02.863 "max_aq_depth": 128, 00:16:02.863 "num_shared_buffers": 511, 00:16:02.863 "buf_cache_size": 4294967295, 00:16:02.863 "dif_insert_or_strip": false, 00:16:02.863 "zcopy": false, 00:16:02.863 "c2h_success": false, 00:16:02.863 "sock_priority": 0, 00:16:02.863 "abort_timeout_sec": 1, 00:16:02.863 "ack_timeout": 0, 00:16:02.863 "data_wr_pool_size": 0 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_create_subsystem", 00:16:02.863 "params": { 00:16:02.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.863 "allow_any_host": false, 00:16:02.863 "serial_number": "00000000000000000000", 00:16:02.863 "model_number": "SPDK bdev Controller", 00:16:02.863 "max_namespaces": 32, 00:16:02.863 "min_cntlid": 1, 00:16:02.863 "max_cntlid": 65519, 00:16:02.863 "ana_reporting": false 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_subsystem_add_host", 00:16:02.863 "params": { 00:16:02.863 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:16:02.863 "host": "nqn.2016-06.io.spdk:host1", 00:16:02.863 "psk": "key0" 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_subsystem_add_ns", 00:16:02.863 "params": { 00:16:02.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.863 "namespace": { 00:16:02.863 "nsid": 1, 00:16:02.863 "bdev_name": "malloc0", 00:16:02.863 "nguid": "AE991623DC4A43F68788BA0DF12578C4", 00:16:02.863 "uuid": "ae991623-dc4a-43f6-8788-ba0df12578c4", 00:16:02.863 "no_auto_visible": false 00:16:02.863 } 00:16:02.863 } 00:16:02.863 }, 00:16:02.863 { 00:16:02.863 "method": "nvmf_subsystem_add_listener", 00:16:02.863 "params": { 00:16:02.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.863 "listen_address": { 00:16:02.863 "trtype": "TCP", 00:16:02.863 "adrfam": "IPv4", 00:16:02.863 "traddr": "10.0.0.3", 00:16:02.863 "trsvcid": "4420" 00:16:02.863 }, 00:16:02.863 "secure_channel": false, 00:16:02.863 "sock_impl": "ssl" 00:16:02.863 } 00:16:02.863 } 00:16:02.863 ] 00:16:02.863 } 00:16:02.863 ] 00:16:02.863 }' 00:16:02.863 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:03.123 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:03.123 "subsystems": [ 00:16:03.123 { 00:16:03.123 "subsystem": "keyring", 00:16:03.123 "config": [ 00:16:03.123 { 00:16:03.123 "method": "keyring_file_add_key", 00:16:03.123 "params": { 00:16:03.123 "name": "key0", 00:16:03.123 "path": "/tmp/tmp.Ie9mA82GKo" 00:16:03.123 } 00:16:03.123 } 00:16:03.123 ] 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "subsystem": "iobuf", 00:16:03.123 "config": [ 00:16:03.123 { 00:16:03.123 "method": "iobuf_set_options", 00:16:03.123 "params": { 00:16:03.123 "small_pool_count": 8192, 00:16:03.123 "large_pool_count": 1024, 00:16:03.123 "small_bufsize": 8192, 00:16:03.123 "large_bufsize": 135168 00:16:03.123 } 00:16:03.123 } 00:16:03.123 ] 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "subsystem": "sock", 00:16:03.123 "config": [ 00:16:03.123 { 00:16:03.123 "method": "sock_set_default_impl", 00:16:03.123 "params": { 00:16:03.123 "impl_name": "uring" 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "sock_impl_set_options", 00:16:03.123 "params": { 00:16:03.123 "impl_name": "ssl", 00:16:03.123 "recv_buf_size": 4096, 00:16:03.123 "send_buf_size": 4096, 00:16:03.123 "enable_recv_pipe": true, 00:16:03.123 "enable_quickack": false, 00:16:03.123 "enable_placement_id": 0, 00:16:03.123 "enable_zerocopy_send_server": true, 00:16:03.123 "enable_zerocopy_send_client": false, 00:16:03.123 "zerocopy_threshold": 0, 00:16:03.123 "tls_version": 0, 00:16:03.123 "enable_ktls": false 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "sock_impl_set_options", 00:16:03.123 "params": { 00:16:03.123 "impl_name": "posix", 00:16:03.123 "recv_buf_size": 2097152, 00:16:03.123 "send_buf_size": 2097152, 00:16:03.123 "enable_recv_pipe": true, 00:16:03.123 "enable_quickack": false, 00:16:03.123 "enable_placement_id": 0, 00:16:03.123 "enable_zerocopy_send_server": true, 00:16:03.123 "enable_zerocopy_send_client": false, 00:16:03.123 "zerocopy_threshold": 0, 00:16:03.123 "tls_version": 0, 00:16:03.123 "enable_ktls": false 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "sock_impl_set_options", 00:16:03.123 "params": { 00:16:03.123 "impl_name": "uring", 00:16:03.123 "recv_buf_size": 2097152, 00:16:03.123 "send_buf_size": 2097152, 00:16:03.123 
"enable_recv_pipe": true, 00:16:03.123 "enable_quickack": false, 00:16:03.123 "enable_placement_id": 0, 00:16:03.123 "enable_zerocopy_send_server": false, 00:16:03.123 "enable_zerocopy_send_client": false, 00:16:03.123 "zerocopy_threshold": 0, 00:16:03.123 "tls_version": 0, 00:16:03.123 "enable_ktls": false 00:16:03.123 } 00:16:03.123 } 00:16:03.123 ] 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "subsystem": "vmd", 00:16:03.123 "config": [] 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "subsystem": "accel", 00:16:03.123 "config": [ 00:16:03.123 { 00:16:03.123 "method": "accel_set_options", 00:16:03.123 "params": { 00:16:03.123 "small_cache_size": 128, 00:16:03.123 "large_cache_size": 16, 00:16:03.123 "task_count": 2048, 00:16:03.123 "sequence_count": 2048, 00:16:03.123 "buf_count": 2048 00:16:03.123 } 00:16:03.123 } 00:16:03.123 ] 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "subsystem": "bdev", 00:16:03.123 "config": [ 00:16:03.123 { 00:16:03.123 "method": "bdev_set_options", 00:16:03.123 "params": { 00:16:03.123 "bdev_io_pool_size": 65535, 00:16:03.123 "bdev_io_cache_size": 256, 00:16:03.123 "bdev_auto_examine": true, 00:16:03.123 "iobuf_small_cache_size": 128, 00:16:03.123 "iobuf_large_cache_size": 16 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_raid_set_options", 00:16:03.123 "params": { 00:16:03.123 "process_window_size_kb": 1024, 00:16:03.123 "process_max_bandwidth_mb_sec": 0 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_iscsi_set_options", 00:16:03.123 "params": { 00:16:03.123 "timeout_sec": 30 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_nvme_set_options", 00:16:03.123 "params": { 00:16:03.123 "action_on_timeout": "none", 00:16:03.123 "timeout_us": 0, 00:16:03.123 "timeout_admin_us": 0, 00:16:03.123 "keep_alive_timeout_ms": 10000, 00:16:03.123 "arbitration_burst": 0, 00:16:03.123 "low_priority_weight": 0, 00:16:03.123 "medium_priority_weight": 0, 00:16:03.123 "high_priority_weight": 0, 00:16:03.123 "nvme_adminq_poll_period_us": 10000, 00:16:03.123 "nvme_ioq_poll_period_us": 0, 00:16:03.123 "io_queue_requests": 512, 00:16:03.123 "delay_cmd_submit": true, 00:16:03.123 "transport_retry_count": 4, 00:16:03.123 "bdev_retry_count": 3, 00:16:03.123 "transport_ack_timeout": 0, 00:16:03.123 "ctrlr_loss_timeout_sec": 0, 00:16:03.123 "reconnect_delay_sec": 0, 00:16:03.123 "fast_io_fail_timeout_sec": 0, 00:16:03.123 "disable_auto_failback": false, 00:16:03.123 "generate_uuids": false, 00:16:03.123 "transport_tos": 0, 00:16:03.123 "nvme_error_stat": false, 00:16:03.123 "rdma_srq_size": 0, 00:16:03.123 "io_path_stat": false, 00:16:03.123 "allow_accel_sequence": false, 00:16:03.123 "rdma_max_cq_size": 0, 00:16:03.123 "rdma_cm_event_timeout_ms": 0, 00:16:03.123 "dhchap_digests": [ 00:16:03.123 "sha256", 00:16:03.123 "sha384", 00:16:03.123 "sha512" 00:16:03.123 ], 00:16:03.123 "dhchap_dhgroups": [ 00:16:03.123 "null", 00:16:03.123 "ffdhe2048", 00:16:03.123 "ffdhe3072", 00:16:03.123 "ffdhe4096", 00:16:03.123 "ffdhe6144", 00:16:03.123 "ffdhe8192" 00:16:03.123 ] 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_nvme_attach_controller", 00:16:03.123 "params": { 00:16:03.123 "name": "nvme0", 00:16:03.123 "trtype": "TCP", 00:16:03.123 "adrfam": "IPv4", 00:16:03.123 "traddr": "10.0.0.3", 00:16:03.123 "trsvcid": "4420", 00:16:03.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.123 "prchk_reftag": false, 00:16:03.123 "prchk_guard": false, 00:16:03.123 "ctrlr_loss_timeout_sec": 0, 00:16:03.123 
"reconnect_delay_sec": 0, 00:16:03.123 "fast_io_fail_timeout_sec": 0, 00:16:03.123 "psk": "key0", 00:16:03.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.123 "hdgst": false, 00:16:03.123 "ddgst": false 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_nvme_set_hotplug", 00:16:03.123 "params": { 00:16:03.123 "period_us": 100000, 00:16:03.123 "enable": false 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_enable_histogram", 00:16:03.123 "params": { 00:16:03.123 "name": "nvme0n1", 00:16:03.123 "enable": true 00:16:03.123 } 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "method": "bdev_wait_for_examine" 00:16:03.123 } 00:16:03.123 ] 00:16:03.123 }, 00:16:03.123 { 00:16:03.123 "subsystem": "nbd", 00:16:03.124 "config": [] 00:16:03.124 } 00:16:03.124 ] 00:16:03.124 }' 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 85152 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85152 ']' 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85152 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85152 00:16:03.124 killing process with pid 85152 00:16:03.124 Received shutdown signal, test time was about 1.000000 seconds 00:16:03.124 00:16:03.124 Latency(us) 00:16:03.124 [2024-12-10T10:29:38.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.124 [2024-12-10T10:29:38.351Z] =================================================================================================================== 00:16:03.124 [2024-12-10T10:29:38.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85152' 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85152 00:16:03.124 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85152 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 85120 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85120 ']' 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85120 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85120 00:16:03.383 killing process with pid 85120 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85120' 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85120 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85120 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.383 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:03.383 "subsystems": [ 00:16:03.383 { 00:16:03.383 "subsystem": "keyring", 00:16:03.383 "config": [ 00:16:03.383 { 00:16:03.383 "method": "keyring_file_add_key", 00:16:03.383 "params": { 00:16:03.383 "name": "key0", 00:16:03.383 "path": "/tmp/tmp.Ie9mA82GKo" 00:16:03.383 } 00:16:03.383 } 00:16:03.383 ] 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "subsystem": "iobuf", 00:16:03.383 "config": [ 00:16:03.383 { 00:16:03.383 "method": "iobuf_set_options", 00:16:03.383 "params": { 00:16:03.383 "small_pool_count": 8192, 00:16:03.383 "large_pool_count": 1024, 00:16:03.383 "small_bufsize": 8192, 00:16:03.383 "large_bufsize": 135168 00:16:03.383 } 00:16:03.383 } 00:16:03.383 ] 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "subsystem": "sock", 00:16:03.383 "config": [ 00:16:03.383 { 00:16:03.383 "method": "sock_set_default_impl", 00:16:03.383 "params": { 00:16:03.383 "impl_name": "uring" 00:16:03.383 } 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "method": "sock_impl_set_options", 00:16:03.383 "params": { 00:16:03.383 "impl_name": "ssl", 00:16:03.383 "recv_buf_size": 4096, 00:16:03.383 "send_buf_size": 4096, 00:16:03.383 "enable_recv_pipe": true, 00:16:03.383 "enable_quickack": false, 00:16:03.383 "enable_placement_id": 0, 00:16:03.383 "enable_zerocopy_send_server": true, 00:16:03.383 "enable_zerocopy_send_client": false, 00:16:03.383 "zerocopy_threshold": 0, 00:16:03.383 "tls_version": 0, 00:16:03.383 "enable_ktls": false 00:16:03.383 } 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "method": "sock_impl_set_options", 00:16:03.383 "params": { 00:16:03.383 "impl_name": "posix", 00:16:03.383 "recv_buf_size": 2097152, 00:16:03.383 "send_buf_size": 2097152, 00:16:03.383 "enable_recv_pipe": true, 00:16:03.383 "enable_quickack": false, 00:16:03.383 "enable_placement_id": 0, 00:16:03.383 "enable_zerocopy_send_server": true, 00:16:03.383 "enable_zerocopy_send_client": false, 00:16:03.383 "zerocopy_threshold": 0, 00:16:03.383 "tls_version": 0, 00:16:03.383 "enable_ktls": false 00:16:03.383 } 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "method": "sock_impl_set_options", 00:16:03.383 "params": { 00:16:03.383 "impl_name": "uring", 00:16:03.383 "recv_buf_size": 2097152, 00:16:03.383 "send_buf_size": 2097152, 00:16:03.383 "enable_recv_pipe": true, 00:16:03.383 "enable_quickack": false, 00:16:03.383 "enable_placement_id": 0, 00:16:03.383 "enable_zerocopy_send_server": false, 00:16:03.383 "enable_zerocopy_send_client": false, 00:16:03.383 "zerocopy_threshold": 0, 00:16:03.383 "tls_version": 0, 00:16:03.383 "enable_ktls": false 00:16:03.383 } 00:16:03.383 } 00:16:03.383 ] 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "subsystem": "vmd", 00:16:03.384 "config": [] 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "subsystem": "accel", 00:16:03.384 "config": [ 
00:16:03.384 { 00:16:03.384 "method": "accel_set_options", 00:16:03.384 "params": { 00:16:03.384 "small_cache_size": 128, 00:16:03.384 "large_cache_size": 16, 00:16:03.384 "task_count": 2048, 00:16:03.384 "sequence_count": 2048, 00:16:03.384 "buf_count": 2048 00:16:03.384 } 00:16:03.384 } 00:16:03.384 ] 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "subsystem": "bdev", 00:16:03.384 "config": [ 00:16:03.384 { 00:16:03.384 "method": "bdev_set_options", 00:16:03.384 "params": { 00:16:03.384 "bdev_io_pool_size": 65535, 00:16:03.384 "bdev_io_cache_size": 256, 00:16:03.384 "bdev_auto_examine": true, 00:16:03.384 "iobuf_small_cache_size": 128, 00:16:03.384 "iobuf_large_cache_size": 16 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "bdev_raid_set_options", 00:16:03.384 "params": { 00:16:03.384 "process_window_size_kb": 1024, 00:16:03.384 "process_max_bandwidth_mb_sec": 0 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "bdev_iscsi_set_options", 00:16:03.384 "params": { 00:16:03.384 "timeout_sec": 30 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "bdev_nvme_set_options", 00:16:03.384 "params": { 00:16:03.384 "action_on_timeout": "none", 00:16:03.384 "timeout_us": 0, 00:16:03.384 "timeout_admin_us": 0, 00:16:03.384 "keep_alive_timeout_ms": 10000, 00:16:03.384 "arbitration_burst": 0, 00:16:03.384 "low_priority_weight": 0, 00:16:03.384 "medium_priority_weight": 0, 00:16:03.384 "high_priority_weight": 0, 00:16:03.384 "nvme_adminq_poll_period_us": 10000, 00:16:03.384 "nvme_ioq_poll_period_us": 0, 00:16:03.384 "io_queue_requests": 0, 00:16:03.384 "delay_cmd_submit": true, 00:16:03.384 "transport_retry_count": 4, 00:16:03.384 "bdev_retry_count": 3, 00:16:03.384 "transport_ack_timeout": 0, 00:16:03.384 "ctrlr_loss_timeout_sec": 0, 00:16:03.384 "reconnect_delay_sec": 0, 00:16:03.384 "fast_io_fail_timeout_sec": 0, 00:16:03.384 "disable_auto_failback": false, 00:16:03.384 "generate_uuids": false, 00:16:03.384 "transport_tos": 0, 00:16:03.384 "nvme_error_stat": false, 00:16:03.384 "rdma_srq_size": 0, 00:16:03.384 "io_path_stat": false, 00:16:03.384 "allow_accel_sequence": false, 00:16:03.384 "rdma_max_cq_size": 0, 00:16:03.384 "rdma_cm_event_timeout_ms": 0, 00:16:03.384 "dhchap_digests": [ 00:16:03.384 "sha256", 00:16:03.384 "sha384", 00:16:03.384 "sha512" 00:16:03.384 ], 00:16:03.384 "dhchap_dhgroups": [ 00:16:03.384 "null", 00:16:03.384 "ffdhe2048", 00:16:03.384 "ffdhe3072", 00:16:03.384 "ffdhe4096", 00:16:03.384 "ffdhe6144", 00:16:03.384 "ffdhe8192" 00:16:03.384 ] 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "bdev_nvme_set_hotplug", 00:16:03.384 "params": { 00:16:03.384 "period_us": 100000, 00:16:03.384 "enable": false 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "bdev_malloc_create", 00:16:03.384 "params": { 00:16:03.384 "name": "malloc0", 00:16:03.384 "num_blocks": 8192, 00:16:03.384 "block_size": 4096, 00:16:03.384 "physical_block_size": 4096, 00:16:03.384 "uuid": "ae991623-dc4a-43f6-8788-ba0df12578c4", 00:16:03.384 "optimal_io_boundary": 0, 00:16:03.384 "md_size": 0, 00:16:03.384 "dif_type": 0, 00:16:03.384 "dif_is_head_of_md": false, 00:16:03.384 "dif_pi_format": 0 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "bdev_wait_for_examine" 00:16:03.384 } 00:16:03.384 ] 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "subsystem": "nbd", 00:16:03.384 "config": [] 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "subsystem": "scheduler", 00:16:03.384 "config": [ 00:16:03.384 { 00:16:03.384 
"method": "framework_set_scheduler", 00:16:03.384 "params": { 00:16:03.384 "name": "static" 00:16:03.384 } 00:16:03.384 } 00:16:03.384 ] 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "subsystem": "nvmf", 00:16:03.384 "config": [ 00:16:03.384 { 00:16:03.384 "method": "nvmf_set_config", 00:16:03.384 "params": { 00:16:03.384 "discovery_filter": "match_any", 00:16:03.384 "admin_cmd_passthru": { 00:16:03.384 "identify_ctrlr": false 00:16:03.384 }, 00:16:03.384 "dhchap_digests": [ 00:16:03.384 "sha256", 00:16:03.384 "sha384", 00:16:03.384 "sha512" 00:16:03.384 ], 00:16:03.384 "dhchap_dhgroups": [ 00:16:03.384 "null", 00:16:03.384 "ffdhe2048", 00:16:03.384 "ffdhe3072", 00:16:03.384 "ffdhe4096", 00:16:03.384 "ffdhe6144", 00:16:03.384 "ffdhe8192" 00:16:03.384 ] 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_set_max_subsystems", 00:16:03.384 "params": { 00:16:03.384 "max_subsystems": 1024 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_set_crdt", 00:16:03.384 "params": { 00:16:03.384 "crdt1": 0, 00:16:03.384 "crdt2": 0, 00:16:03.384 "crdt3": 0 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_create_transport", 00:16:03.384 "params": { 00:16:03.384 "trtype": "TCP", 00:16:03.384 "max_queue_depth": 128, 00:16:03.384 "max_io_qpairs_per_ctrlr": 127, 00:16:03.384 "in_capsule_data_size": 4096, 00:16:03.384 "max_io_size": 131072, 00:16:03.384 "io_unit_size": 131072, 00:16:03.384 "max_aq_depth": 128, 00:16:03.384 "num_shared_buffers": 511, 00:16:03.384 "buf_cache_size": 4294967295, 00:16:03.384 "dif_insert_or_strip": false, 00:16:03.384 "zcopy": false, 00:16:03.384 "c2h_success": false, 00:16:03.384 "sock_priority": 0, 00:16:03.384 "abort_timeout_sec": 1, 00:16:03.384 "ack_timeout": 0, 00:16:03.384 "data_wr_pool_size": 0 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_create_subsystem", 00:16:03.384 "params": { 00:16:03.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.384 "allow_any_host": false, 00:16:03.384 "serial_number": "00000000000000000000", 00:16:03.384 "model_number": "SPDK bdev Controller", 00:16:03.384 "max_namespaces": 32, 00:16:03.384 "min_cntlid": 1, 00:16:03.384 "max_cntlid": 65519, 00:16:03.384 "ana_reporting": false 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_subsystem_add_host", 00:16:03.384 "params": { 00:16:03.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.384 "host": "nqn.2016-06.io.spdk:host1", 00:16:03.384 "psk": "key0" 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_subsystem_add_ns", 00:16:03.384 "params": { 00:16:03.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.384 "namespace": { 00:16:03.384 "nsid": 1, 00:16:03.384 "bdev_name": "malloc0", 00:16:03.384 "nguid": "AE991623DC4A43F68788BA0DF12578C4", 00:16:03.384 "uuid": "ae991623-dc4a-43f6-8788-ba0df12578c4", 00:16:03.384 "no_auto_visible": false 00:16:03.384 } 00:16:03.384 } 00:16:03.384 }, 00:16:03.384 { 00:16:03.384 "method": "nvmf_subsystem_add_listener", 00:16:03.384 "params": { 00:16:03.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.384 "listen_address": { 00:16:03.384 "trtype": "TCP", 00:16:03.384 "adrfam": "IPv4", 00:16:03.384 "traddr": "10.0.0.3", 00:16:03.384 "trsvcid": "4420" 00:16:03.384 }, 00:16:03.384 "secure_channel": false, 00:16:03.384 "sock_impl": "ssl" 00:16:03.384 } 00:16:03.384 } 00:16:03.384 ] 00:16:03.384 } 00:16:03.384 ] 00:16:03.384 }' 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85194 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85194 00:16:03.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85194 ']' 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.384 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.643 [2024-12-10 10:29:38.616757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:03.643 [2024-12-10 10:29:38.617058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.643 [2024-12-10 10:29:38.756570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.643 [2024-12-10 10:29:38.788426] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.643 [2024-12-10 10:29:38.788501] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.643 [2024-12-10 10:29:38.788512] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.643 [2024-12-10 10:29:38.788519] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.643 [2024-12-10 10:29:38.788526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
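The app_setup_trace notices above describe the runtime trace facility; the shared-memory file they mention is the same nvmf_trace.0 that the cleanup at the end of this test tars into the output directory. Quoting the commands the notices themselves suggest:

  spdk_trace -s nvmf -i 0       # snapshot of tracepoint events while the target is running
  cp /dev/shm/nvmf_trace.0 .    # or keep the shared-memory file for offline analysis/debug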
00:16:03.643 [2024-12-10 10:29:38.788604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.903 [2024-12-10 10:29:38.926854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.903 [2024-12-10 10:29:38.978670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.903 [2024-12-10 10:29:39.018036] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:03.903 [2024-12-10 10:29:39.018207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85226 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85226 /var/tmp/bdevperf.sock 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85226 ']' 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.470 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:04.470 "subsystems": [ 00:16:04.470 { 00:16:04.470 "subsystem": "keyring", 00:16:04.470 "config": [ 00:16:04.470 { 00:16:04.470 "method": "keyring_file_add_key", 00:16:04.470 "params": { 00:16:04.470 "name": "key0", 00:16:04.470 "path": "/tmp/tmp.Ie9mA82GKo" 00:16:04.471 } 00:16:04.471 } 00:16:04.471 ] 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "subsystem": "iobuf", 00:16:04.471 "config": [ 00:16:04.471 { 00:16:04.471 "method": "iobuf_set_options", 00:16:04.471 "params": { 00:16:04.471 "small_pool_count": 8192, 00:16:04.471 "large_pool_count": 1024, 00:16:04.471 "small_bufsize": 8192, 00:16:04.471 "large_bufsize": 135168 00:16:04.471 } 00:16:04.471 } 00:16:04.471 ] 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "subsystem": "sock", 00:16:04.471 "config": [ 00:16:04.471 { 00:16:04.471 "method": "sock_set_default_impl", 00:16:04.471 "params": { 00:16:04.471 "impl_name": "uring" 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "sock_impl_set_options", 00:16:04.471 "params": { 00:16:04.471 "impl_name": "ssl", 00:16:04.471 "recv_buf_size": 4096, 00:16:04.471 "send_buf_size": 4096, 00:16:04.471 "enable_recv_pipe": true, 00:16:04.471 "enable_quickack": false, 00:16:04.471 "enable_placement_id": 0, 00:16:04.471 
"enable_zerocopy_send_server": true, 00:16:04.471 "enable_zerocopy_send_client": false, 00:16:04.471 "zerocopy_threshold": 0, 00:16:04.471 "tls_version": 0, 00:16:04.471 "enable_ktls": false 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "sock_impl_set_options", 00:16:04.471 "params": { 00:16:04.471 "impl_name": "posix", 00:16:04.471 "recv_buf_size": 2097152, 00:16:04.471 "send_buf_size": 2097152, 00:16:04.471 "enable_recv_pipe": true, 00:16:04.471 "enable_quickack": false, 00:16:04.471 "enable_placement_id": 0, 00:16:04.471 "enable_zerocopy_send_server": true, 00:16:04.471 "enable_zerocopy_send_client": false, 00:16:04.471 "zerocopy_threshold": 0, 00:16:04.471 "tls_version": 0, 00:16:04.471 "enable_ktls": false 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "sock_impl_set_options", 00:16:04.471 "params": { 00:16:04.471 "impl_name": "uring", 00:16:04.471 "recv_buf_size": 2097152, 00:16:04.471 "send_buf_size": 2097152, 00:16:04.471 "enable_recv_pipe": true, 00:16:04.471 "enable_quickack": false, 00:16:04.471 "enable_placement_id": 0, 00:16:04.471 "enable_zerocopy_send_server": false, 00:16:04.471 "enable_zerocopy_send_client": false, 00:16:04.471 "zerocopy_threshold": 0, 00:16:04.471 "tls_version": 0, 00:16:04.471 "enable_ktls": false 00:16:04.471 } 00:16:04.471 } 00:16:04.471 ] 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "subsystem": "vmd", 00:16:04.471 "config": [] 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "subsystem": "accel", 00:16:04.471 "config": [ 00:16:04.471 { 00:16:04.471 "method": "accel_set_options", 00:16:04.471 "params": { 00:16:04.471 "small_cache_size": 128, 00:16:04.471 "large_cache_size": 16, 00:16:04.471 "task_count": 2048, 00:16:04.471 "sequence_count": 2048, 00:16:04.471 "buf_count": 2048 00:16:04.471 } 00:16:04.471 } 00:16:04.471 ] 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "subsystem": "bdev", 00:16:04.471 "config": [ 00:16:04.471 { 00:16:04.471 "method": "bdev_set_options", 00:16:04.471 "params": { 00:16:04.471 "bdev_io_pool_size": 65535, 00:16:04.471 "bdev_io_cache_size": 256, 00:16:04.471 "bdev_auto_examine": true, 00:16:04.471 "iobuf_small_cache_size": 128, 00:16:04.471 "iobuf_large_cache_size": 16 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_raid_set_options", 00:16:04.471 "params": { 00:16:04.471 "process_window_size_kb": 1024, 00:16:04.471 "process_max_bandwidth_mb_sec": 0 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_iscsi_set_options", 00:16:04.471 "params": { 00:16:04.471 "timeout_sec": 30 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_nvme_set_options", 00:16:04.471 "params": { 00:16:04.471 "action_on_timeout": "none", 00:16:04.471 "timeout_us": 0, 00:16:04.471 "timeout_admin_us": 0, 00:16:04.471 "keep_alive_timeout_ms": 10000, 00:16:04.471 "arbitration_burst": 0, 00:16:04.471 "low_priority_weight": 0, 00:16:04.471 "medium_priority_weight": 0, 00:16:04.471 "high_priority_weight": 0, 00:16:04.471 "nvme_adminq_poll_period_us": 10000, 00:16:04.471 "nvme_ioq_poll_period_us": 0, 00:16:04.471 "io_queue_requests": 512, 00:16:04.471 "delay_cmd_submit": true, 00:16:04.471 "transport_retry_count": 4, 00:16:04.471 "bdev_retry_count": 3, 00:16:04.471 "transport_ack_timeout": 0, 00:16:04.471 "ctrlr_loss_timeout_sec": 0, 00:16:04.471 "reconnect_delay_sec": 0, 00:16:04.471 "fast_io_fail_timeout_sec": 0, 00:16:04.471 "disable_auto_failback": false, 00:16:04.471 "generate_uuids": false, 00:16:04.471 "transport_tos": 0, 00:16:04.471 
"nvme_error_stat": false, 00:16:04.471 "rdma_srq_size": 0, 00:16:04.471 "io_path_stat": false, 00:16:04.471 "allow_accel_sequence": false, 00:16:04.471 "rdma_max_cq_size": 0, 00:16:04.471 "rdma_cm_event_timeout_ms": 0, 00:16:04.471 "dhchap_digests": [ 00:16:04.471 "sha256", 00:16:04.471 "sha384", 00:16:04.471 "sha512" 00:16:04.471 ], 00:16:04.471 "dhchap_dhgroups": [ 00:16:04.471 "null", 00:16:04.471 "ffdhe2048", 00:16:04.471 "ffdhe3072", 00:16:04.471 "ffdhe4096", 00:16:04.471 "ffdhe6144", 00:16:04.471 "ffdhe8192" 00:16:04.471 ] 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_nvme_attach_controller", 00:16:04.471 "params": { 00:16:04.471 "name": "nvme0", 00:16:04.471 "trtype": "TCP", 00:16:04.471 "adrfam": "IPv4", 00:16:04.471 "traddr": "10.0.0.3", 00:16:04.471 "trsvcid": "4420", 00:16:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.471 "prchk_reftag": false, 00:16:04.471 "prchk_guard": false, 00:16:04.471 "ctrlr_loss_timeout_sec": 0, 00:16:04.471 "reconnect_delay_sec": 0, 00:16:04.471 "fast_io_fail_timeout_sec": 0, 00:16:04.471 "psk": "key0", 00:16:04.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:04.471 "hdgst": false, 00:16:04.471 "ddgst": false 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_nvme_set_hotplug", 00:16:04.471 "params": { 00:16:04.471 "period_us": 100000, 00:16:04.471 "enable": false 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_enable_histogram", 00:16:04.471 "params": { 00:16:04.471 "name": "nvme0n1", 00:16:04.471 "enable": true 00:16:04.471 } 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "method": "bdev_wait_for_examine" 00:16:04.471 } 00:16:04.471 ] 00:16:04.471 }, 00:16:04.471 { 00:16:04.471 "subsystem": "nbd", 00:16:04.471 "config": [] 00:16:04.471 } 00:16:04.471 ] 00:16:04.471 }' 00:16:04.471 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.471 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.471 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.471 [2024-12-10 10:29:39.644516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:04.471 [2024-12-10 10:29:39.644801] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85226 ] 00:16:04.729 [2024-12-10 10:29:39.777109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.729 [2024-12-10 10:29:39.810824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.729 [2024-12-10 10:29:39.920697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.729 [2024-12-10 10:29:39.949567] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.665 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.665 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:05.665 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:05.665 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:05.924 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.924 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:05.924 Running I/O for 1 seconds... 00:16:06.860 4737.00 IOPS, 18.50 MiB/s 00:16:06.860 Latency(us) 00:16:06.860 [2024-12-10T10:29:42.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.860 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.860 Verification LBA range: start 0x0 length 0x2000 00:16:06.860 nvme0n1 : 1.01 4802.96 18.76 0.00 0.00 26456.20 4438.57 20614.05 00:16:06.860 [2024-12-10T10:29:42.087Z] =================================================================================================================== 00:16:06.860 [2024-12-10T10:29:42.087Z] Total : 4802.96 18.76 0.00 0.00 26456.20 4438.57 20614.05 00:16:06.860 { 00:16:06.860 "results": [ 00:16:06.860 { 00:16:06.860 "job": "nvme0n1", 00:16:06.860 "core_mask": "0x2", 00:16:06.860 "workload": "verify", 00:16:06.860 "status": "finished", 00:16:06.860 "verify_range": { 00:16:06.860 "start": 0, 00:16:06.860 "length": 8192 00:16:06.860 }, 00:16:06.860 "queue_depth": 128, 00:16:06.860 "io_size": 4096, 00:16:06.860 "runtime": 1.013126, 00:16:06.860 "iops": 4802.956394367532, 00:16:06.860 "mibps": 18.761548415498172, 00:16:06.860 "io_failed": 0, 00:16:06.860 "io_timeout": 0, 00:16:06.860 "avg_latency_us": 26456.202201546912, 00:16:06.860 "min_latency_us": 4438.574545454546, 00:16:06.860 "max_latency_us": 20614.05090909091 00:16:06.860 } 00:16:06.860 ], 00:16:06.860 "core_count": 1 00:16:06.860 } 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:06.860 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:06.860 nvmf_trace.0 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85226 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85226 ']' 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85226 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85226 00:16:07.119 killing process with pid 85226 00:16:07.119 Received shutdown signal, test time was about 1.000000 seconds 00:16:07.119 00:16:07.119 Latency(us) 00:16:07.119 [2024-12-10T10:29:42.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.119 [2024-12-10T10:29:42.346Z] =================================================================================================================== 00:16:07.119 [2024-12-10T10:29:42.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85226' 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85226 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85226 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:07.119 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.379 rmmod nvme_tcp 00:16:07.379 rmmod nvme_fabrics 00:16:07.379 rmmod nvme_keyring 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 85194 ']' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 85194 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85194 ']' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85194 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85194 00:16:07.379 killing process with pid 85194 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85194' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85194 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85194 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:07.379 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:07.638 10:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZUEoGohFGy /tmp/tmp.lj9Yp8yjza /tmp/tmp.Ie9mA82GKo 00:16:07.638 00:16:07.638 real 1m22.199s 00:16:07.638 user 2m14.240s 00:16:07.638 sys 0m26.126s 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.638 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.638 ************************************ 00:16:07.638 END TEST nvmf_tls 00:16:07.638 ************************************ 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:07.898 ************************************ 00:16:07.898 START TEST nvmf_fips 00:16:07.898 ************************************ 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:07.898 * Looking for test storage... 
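The nvmf_tls teardown traced just above -- archive the shared-memory trace file, stop the target and perf processes, strip the SPDK_NVMF-tagged iptables rules, dismantle the veth/bridge/namespace topology, and remove the temporary PSK files -- condenses to roughly the sketch below. It is a simplification of common.sh's nvmftestfini/nvmf_veth_fini, not the exact helpers: killprocess and remove_spdk_ns are reduced to their net effect, and each step is made idempotent with '|| true'.

  #!/usr/bin/env bash
  # Simplified teardown sketch (run as root); names are taken from the trace.
  set -x
  NETNS=nvmf_tgt_ns_spdk

  # Archive the SPDK trace shared-memory file before the target goes away.
  tar -C /dev/shm -czf ./nvmf_trace.0_shm.tar.gz nvmf_trace.0 || true

  # Unload the NVMe/TCP initiator modules (nvme_keyring falls out with them).
  modprobe -v -r nvme-tcp nvme-fabrics || true

  # Drop only the iptables rules the test tagged with the SPDK_NVMF comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach the bridge ports, bring them down, then delete the links.
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster || true
      ip link set "$ifc" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if  || true
  ip link delete nvmf_init_if2 || true
  ip netns exec "$NETNS" ip link delete nvmf_tgt_if  || true
  ip netns exec "$NETNS" ip link delete nvmf_tgt_if2 || true

  # Drop the namespace itself (the log's remove_spdk_ns helper; details elided there).
  ip netns delete "$NETNS" || true

  # Remove this run's temporary PSK files, as in target/tls.sh@18.
  rm -f /tmp/tmp.ZUEoGohFGy /tmp/tmp.lj9Yp8yjza /tmp/tmp.Ie9mA82GKo
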
00:16:07.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:16:07.898 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.898 --rc genhtml_branch_coverage=1 00:16:07.898 --rc genhtml_function_coverage=1 00:16:07.898 --rc genhtml_legend=1 00:16:07.898 --rc geninfo_all_blocks=1 00:16:07.898 --rc geninfo_unexecuted_blocks=1 00:16:07.898 00:16:07.898 ' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.898 --rc genhtml_branch_coverage=1 00:16:07.898 --rc genhtml_function_coverage=1 00:16:07.898 --rc genhtml_legend=1 00:16:07.898 --rc geninfo_all_blocks=1 00:16:07.898 --rc geninfo_unexecuted_blocks=1 00:16:07.898 00:16:07.898 ' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.898 --rc genhtml_branch_coverage=1 00:16:07.898 --rc genhtml_function_coverage=1 00:16:07.898 --rc genhtml_legend=1 00:16:07.898 --rc geninfo_all_blocks=1 00:16:07.898 --rc geninfo_unexecuted_blocks=1 00:16:07.898 00:16:07.898 ' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.898 --rc genhtml_branch_coverage=1 00:16:07.898 --rc genhtml_function_coverage=1 00:16:07.898 --rc genhtml_legend=1 00:16:07.898 --rc geninfo_all_blocks=1 00:16:07.898 --rc geninfo_unexecuted_blocks=1 00:16:07.898 00:16:07.898 ' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
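The version checks in this trace -- the lcov "1.15 < 2" test just above and the OpenSSL "3.1.1 >= 3.0.0" floor a little further down -- all funnel through cmp_versions in scripts/common.sh, which splits version strings on '.', '-' and ':' and compares the fields numerically. A minimal re-implementation of that comparison (illustrative name; the real helpers are lt/ge/cmp_versions/decimal) looks like:

  # Returns 0 (true) when dotted version $1 sorts strictly before $2.
  # Fields are compared numerically; missing fields default to 0.
  # (The real decimal() helper additionally rejects non-numeric fields.)
  ver_lt() {
      local -a a b
      local i n
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier component decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                        # equal: not "less than"
  }

  ver_lt 1.15 2      && echo 'lcov 1.15 < 2'          # matches the check above
  ver_lt 3.1.1 3.0.0 || echo 'OpenSSL 3.1.1 >= 3.0.0'  # matches the FIPS target check
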
00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:07.898 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:07.899 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:16:08.158 Error setting digest 00:16:08.158 40124F10177F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:08.158 40124F10177F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.158 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:08.159 
10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:08.159 Cannot find device "nvmf_init_br" 00:16:08.159 10:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:08.159 Cannot find device "nvmf_init_br2" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:08.159 Cannot find device "nvmf_tgt_br" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.159 Cannot find device "nvmf_tgt_br2" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:08.159 Cannot find device "nvmf_init_br" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:08.159 Cannot find device "nvmf_init_br2" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:08.159 Cannot find device "nvmf_tgt_br" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:08.159 Cannot find device "nvmf_tgt_br2" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:08.159 Cannot find device "nvmf_br" 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:08.159 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:08.418 Cannot find device "nvmf_init_if" 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:08.418 Cannot find device "nvmf_init_if2" 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.418 10:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.418 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:08.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:16:08.419 00:16:08.419 --- 10.0.0.3 ping statistics --- 00:16:08.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.419 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:08.419 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:08.678 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:08.678 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:16:08.678 00:16:08.678 --- 10.0.0.4 ping statistics --- 00:16:08.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.678 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:08.678 00:16:08.678 --- 10.0.0.1 ping statistics --- 00:16:08.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.678 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:08.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:08.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:16:08.678 00:16:08.678 --- 10.0.0.2 ping statistics --- 00:16:08.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.678 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=85543 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 85543 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85543 ']' 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.678 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.678 [2024-12-10 10:29:43.782695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
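Before the target application starts, nvmf_veth_init has built the topology traced above: two initiator veths on the host, two target veths inside the nvmf_tgt_ns_spdk namespace, all joined by one bridge, with the NVMe/TCP port allowed through iptables and connectivity verified by ping. Stripped of the "Cannot find device" cleanup noise and the SPDK_NVMF iptables comment tags, the setup reduces to roughly this sketch (run as root):

  #!/usr/bin/env bash
  # Sketch of the virtual test topology; names and addresses mirror the log.
  set -euxo pipefail
  NETNS=nvmf_tgt_ns_spdk
  ip netns add "$NETNS"

  # veth pairs: the *_if ends carry the IPs, the *_br ends join the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-side endpoints live inside the namespace.
  ip link set nvmf_tgt_if  netns "$NETNS"
  ip link set nvmf_tgt_if2 netns "$NETNS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NETNS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  for ifc in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" up
  done
  ip netns exec "$NETNS" ip link set nvmf_tgt_if up
  ip netns exec "$NETNS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NETNS" ip link set lo up

  # One bridge ties the host-side halves of all four pairs together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" master nvmf_br
  done

  # Allow the NVMe/TCP port in and let traffic hairpin across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity checks, exactly the four pings the log performs.
  ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4
  ip netns exec "$NETNS" ping -c 1 10.0.0.1
  ip netns exec "$NETNS" ping -c 1 10.0.0.2
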
00:16:08.678 [2024-12-10 10:29:43.783514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.937 [2024-12-10 10:29:43.927506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.937 [2024-12-10 10:29:43.968763] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.937 [2024-12-10 10:29:43.968824] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.937 [2024-12-10 10:29:43.968838] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.937 [2024-12-10 10:29:43.968848] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.937 [2024-12-10 10:29:43.968856] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.937 [2024-12-10 10:29:43.968895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.937 [2024-12-10 10:29:44.001738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Zez 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Zez 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Zez 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Zez 00:16:08.937 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.196 [2024-12-10 10:29:44.387759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.196 [2024-12-10 10:29:44.403703] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:09.196 [2024-12-10 10:29:44.404090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:09.454 malloc0 00:16:09.454 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85576 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85576 /var/tmp/bdevperf.sock 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85576 ']' 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.454 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:09.454 [2024-12-10 10:29:44.559951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:09.454 [2024-12-10 10:29:44.560252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85576 ] 00:16:09.712 [2024-12-10 10:29:44.702863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.712 [2024-12-10 10:29:44.744874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.712 [2024-12-10 10:29:44.779963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.712 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.712 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:09.712 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Zez 00:16:09.969 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:10.227 [2024-12-10 10:29:45.347376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:10.227 TLSTESTn1 00:16:10.227 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:10.485 Running I/O for 10 seconds... 
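The PSK plumbing and controller attach that lead into this 10-second run condense to roughly the following initiator-side sketch. It assumes the bdevperf instance started with "-z -r /var/tmp/bdevperf.sock" above is already listening on that socket; the key material, paths and NQNs are copied from the trace, and the target-side subsystem/listener configuration is not repeated here.

  #!/usr/bin/env bash
  # Register the TLS PSK with bdevperf and drive the verify workload over it.
  set -euxo pipefail

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  SOCK=/var/tmp/bdevperf.sock
  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

  # PSK files must be private (0600) before they are handed to the keyring.
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$KEY" > "$key_path"
  chmod 0600 "$key_path"

  # Register the PSK under the name "key0", then attach over TLS to the
  # target listening on 10.0.0.3:4420 inside the test namespace.
  "$RPC" -s "$SOCK" keyring_file_add_key key0 "$key_path"
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # Kick off the queued workload (the bdevperf process was started with
  # -q 128 -o 4096 -w verify -t 10, i.e. QD 128, 4 KiB I/O, 10 seconds).
  "$PERF" -s "$SOCK" perform_tests
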
00:16:12.353 4351.00 IOPS, 17.00 MiB/s [2024-12-10T10:29:48.954Z] 4352.00 IOPS, 17.00 MiB/s [2024-12-10T10:29:49.888Z] 4410.00 IOPS, 17.23 MiB/s [2024-12-10T10:29:50.824Z] 4437.25 IOPS, 17.33 MiB/s [2024-12-10T10:29:51.760Z] 4405.80 IOPS, 17.21 MiB/s [2024-12-10T10:29:52.757Z] 4413.33 IOPS, 17.24 MiB/s [2024-12-10T10:29:53.693Z] 4416.43 IOPS, 17.25 MiB/s [2024-12-10T10:29:54.629Z] 4426.88 IOPS, 17.29 MiB/s [2024-12-10T10:29:55.569Z] 4429.44 IOPS, 17.30 MiB/s [2024-12-10T10:29:55.569Z] 4433.50 IOPS, 17.32 MiB/s 00:16:20.342 Latency(us) 00:16:20.342 [2024-12-10T10:29:55.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.342 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:20.342 Verification LBA range: start 0x0 length 0x2000 00:16:20.342 TLSTESTn1 : 10.02 4439.04 17.34 0.00 0.00 28783.68 5779.08 23235.49 00:16:20.342 [2024-12-10T10:29:55.569Z] =================================================================================================================== 00:16:20.342 [2024-12-10T10:29:55.569Z] Total : 4439.04 17.34 0.00 0.00 28783.68 5779.08 23235.49 00:16:20.342 { 00:16:20.342 "results": [ 00:16:20.342 { 00:16:20.342 "job": "TLSTESTn1", 00:16:20.342 "core_mask": "0x4", 00:16:20.342 "workload": "verify", 00:16:20.342 "status": "finished", 00:16:20.342 "verify_range": { 00:16:20.342 "start": 0, 00:16:20.342 "length": 8192 00:16:20.342 }, 00:16:20.342 "queue_depth": 128, 00:16:20.342 "io_size": 4096, 00:16:20.342 "runtime": 10.015236, 00:16:20.342 "iops": 4439.036683708701, 00:16:20.342 "mibps": 17.339987045737114, 00:16:20.342 "io_failed": 0, 00:16:20.342 "io_timeout": 0, 00:16:20.342 "avg_latency_us": 28783.682623926976, 00:16:20.342 "min_latency_us": 5779.083636363636, 00:16:20.342 "max_latency_us": 23235.49090909091 00:16:20.342 } 00:16:20.342 ], 00:16:20.342 "core_count": 1 00:16:20.342 } 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:20.601 nvmf_trace.0 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85576 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85576 ']' 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
85576 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85576 00:16:20.601 killing process with pid 85576 00:16:20.601 Received shutdown signal, test time was about 10.000000 seconds 00:16:20.601 00:16:20.601 Latency(us) 00:16:20.601 [2024-12-10T10:29:55.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.601 [2024-12-10T10:29:55.828Z] =================================================================================================================== 00:16:20.601 [2024-12-10T10:29:55.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85576' 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85576 00:16:20.601 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85576 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.859 rmmod nvme_tcp 00:16:20.859 rmmod nvme_fabrics 00:16:20.859 rmmod nvme_keyring 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 85543 ']' 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 85543 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85543 ']' 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 85543 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85543 00:16:20.859 killing process with pid 85543 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85543' 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85543 00:16:20.859 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85543 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.118 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:21.376 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Zez 00:16:21.376 00:16:21.376 real 0m13.485s 00:16:21.376 user 0m18.373s 00:16:21.376 sys 0m5.575s 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:21.376 ************************************ 00:16:21.376 END TEST nvmf_fips 00:16:21.376 ************************************ 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.376 ************************************ 00:16:21.376 START TEST nvmf_control_msg_list 00:16:21.376 ************************************ 00:16:21.376 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:21.376 * Looking for test storage... 00:16:21.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.377 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.635 --rc genhtml_branch_coverage=1 00:16:21.635 --rc genhtml_function_coverage=1 00:16:21.635 --rc genhtml_legend=1 00:16:21.635 --rc geninfo_all_blocks=1 00:16:21.635 --rc geninfo_unexecuted_blocks=1 00:16:21.635 00:16:21.635 ' 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.635 --rc genhtml_branch_coverage=1 00:16:21.635 --rc genhtml_function_coverage=1 00:16:21.635 --rc genhtml_legend=1 00:16:21.635 --rc geninfo_all_blocks=1 00:16:21.635 --rc geninfo_unexecuted_blocks=1 00:16:21.635 00:16:21.635 ' 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.635 --rc genhtml_branch_coverage=1 00:16:21.635 --rc genhtml_function_coverage=1 00:16:21.635 --rc genhtml_legend=1 00:16:21.635 --rc geninfo_all_blocks=1 00:16:21.635 --rc geninfo_unexecuted_blocks=1 00:16:21.635 00:16:21.635 ' 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.635 --rc genhtml_branch_coverage=1 00:16:21.635 --rc genhtml_function_coverage=1 00:16:21.635 --rc genhtml_legend=1 00:16:21.635 --rc geninfo_all_blocks=1 00:16:21.635 --rc 
geninfo_unexecuted_blocks=1 00:16:21.635 00:16:21.635 ' 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.635 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.636 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:21.636 Cannot find device "nvmf_init_br" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:21.636 Cannot find device "nvmf_init_br2" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:21.636 Cannot find device "nvmf_tgt_br" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.636 Cannot find device "nvmf_tgt_br2" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:21.636 Cannot find device "nvmf_init_br" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:21.636 Cannot find device "nvmf_init_br2" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:21.636 Cannot find device "nvmf_tgt_br" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:21.636 Cannot find device "nvmf_tgt_br2" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:21.636 Cannot find device "nvmf_br" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:21.636 Cannot find 
device "nvmf_init_if" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:21.636 Cannot find device "nvmf_init_if2" 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:21.636 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:21.637 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:21.895 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:21.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:16:21.895 00:16:21.895 --- 10.0.0.3 ping statistics --- 00:16:21.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.895 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:21.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:21.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:21.895 00:16:21.895 --- 10.0.0.4 ping statistics --- 00:16:21.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.895 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:21.895 00:16:21.895 --- 10.0.0.1 ping statistics --- 00:16:21.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.895 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:21.895 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:21.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:21.895 00:16:21.895 --- 10.0.0.2 ping statistics --- 00:16:21.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.895 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=85950 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 85950 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 85950 ']' 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
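For reference, the test-network bring-up traced above reduces to roughly the following sequence. This is a sketch reconstructed from the ip/iptables commands visible in the log (namespace and interface names as used by nvmf_veth_init), not an exact replay of test/nvmf/common.sh; the SPDK_NVMF comment on the firewall rules is what the later teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore) keys on.

#!/usr/bin/env bash
# Recreate the veth topology used by the nvmf TCP tests: two initiator
# interfaces in the default namespace, two target interfaces in a netns,
# all peer ends joined by one bridge.
set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: *_if is the endpoint, *_br is the peer that gets bridged
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1/.2 for the initiators, 10.0.0.3/.4 inside the target netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# one bridge ties the peer ends together so initiators can reach the target netns
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# open NVMe/TCP port 4420 toward the initiator interfaces and allow bridge forwarding;
# the comment tag lets teardown strip exactly these rules later
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
# connectivity sanity checks, mirroring the pings recorded in the log
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2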
00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.895 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.895 [2024-12-10 10:29:57.094993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:21.895 [2024-12-10 10:29:57.095082] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.153 [2024-12-10 10:29:57.238244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.153 [2024-12-10 10:29:57.279003] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.153 [2024-12-10 10:29:57.279091] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.153 [2024-12-10 10:29:57.279117] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.153 [2024-12-10 10:29:57.279127] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.153 [2024-12-10 10:29:57.279136] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.153 [2024-12-10 10:29:57.279165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.153 [2024-12-10 10:29:57.311941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 [2024-12-10 10:29:58.105867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 Malloc0 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 [2024-12-10 10:29:58.160082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85988 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85989 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85990 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85988 00:16:23.089 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:23.348 [2024-12-10 10:29:58.338253] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:23.348 [2024-12-10 10:29:58.348633] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:23.348 [2024-12-10 10:29:58.349029] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:24.284 Initializing NVMe Controllers 00:16:24.284 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:24.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:24.284 Initialization complete. Launching workers. 00:16:24.284 ======================================================== 00:16:24.284 Latency(us) 00:16:24.284 Device Information : IOPS MiB/s Average min max 00:16:24.284 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3646.00 14.24 273.86 120.69 560.19 00:16:24.284 ======================================================== 00:16:24.284 Total : 3646.00 14.24 273.86 120.69 560.19 00:16:24.284 00:16:24.284 Initializing NVMe Controllers 00:16:24.284 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:24.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:24.284 Initialization complete. Launching workers. 00:16:24.284 ======================================================== 00:16:24.284 Latency(us) 00:16:24.284 Device Information : IOPS MiB/s Average min max 00:16:24.284 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3641.00 14.22 274.33 138.06 509.37 00:16:24.284 ======================================================== 00:16:24.284 Total : 3641.00 14.22 274.33 138.06 509.37 00:16:24.284 00:16:24.284 Initializing NVMe Controllers 00:16:24.284 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:24.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:24.284 Initialization complete. Launching workers. 
00:16:24.284 ======================================================== 00:16:24.284 Latency(us) 00:16:24.284 Device Information : IOPS MiB/s Average min max 00:16:24.284 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3633.95 14.20 274.87 159.59 540.45 00:16:24.285 ======================================================== 00:16:24.285 Total : 3633.95 14.20 274.87 159.59 540.45 00:16:24.285 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85989 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85990 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.285 rmmod nvme_tcp 00:16:24.285 rmmod nvme_fabrics 00:16:24.285 rmmod nvme_keyring 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 85950 ']' 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 85950 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 85950 ']' 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 85950 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.285 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85950 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.544 killing process with pid 85950 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85950' 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 85950 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 85950 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:24.544 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:24.803 00:16:24.803 real 0m3.495s 00:16:24.803 user 0m5.603s 00:16:24.803 
sys 0m1.282s 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:24.803 ************************************ 00:16:24.803 END TEST nvmf_control_msg_list 00:16:24.803 ************************************ 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.803 ************************************ 00:16:24.803 START TEST nvmf_wait_for_buf 00:16:24.803 ************************************ 00:16:24.803 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:25.063 * Looking for test storage... 00:16:25.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.063 --rc genhtml_branch_coverage=1 00:16:25.063 --rc genhtml_function_coverage=1 00:16:25.063 --rc genhtml_legend=1 00:16:25.063 --rc geninfo_all_blocks=1 00:16:25.063 --rc geninfo_unexecuted_blocks=1 00:16:25.063 00:16:25.063 ' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.063 --rc genhtml_branch_coverage=1 00:16:25.063 --rc genhtml_function_coverage=1 00:16:25.063 --rc genhtml_legend=1 00:16:25.063 --rc geninfo_all_blocks=1 00:16:25.063 --rc geninfo_unexecuted_blocks=1 00:16:25.063 00:16:25.063 ' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.063 --rc genhtml_branch_coverage=1 00:16:25.063 --rc genhtml_function_coverage=1 00:16:25.063 --rc genhtml_legend=1 00:16:25.063 --rc geninfo_all_blocks=1 00:16:25.063 --rc geninfo_unexecuted_blocks=1 00:16:25.063 00:16:25.063 ' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:25.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.063 --rc genhtml_branch_coverage=1 00:16:25.063 --rc genhtml_function_coverage=1 00:16:25.063 --rc genhtml_legend=1 00:16:25.063 --rc geninfo_all_blocks=1 00:16:25.063 --rc geninfo_unexecuted_blocks=1 00:16:25.063 00:16:25.063 ' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.063 10:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.063 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.064 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
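The trace below is nvmftestinit building the virtual test network: with NET_TYPE=virt and a tcp transport, nvmf_veth_init gives the target its own network namespace and links it to the initiator side through veth pairs and a bridge. A condensed sketch of that topology, using only names and addresses that appear in the log (the full sequence also creates the second initiator/target pair, brings every link up, and adds iptables rules before the ping checks):

    # Condensed sketch of the topology nvmf_veth_init builds (names/addresses taken from the log)
    ip netns add nvmf_tgt_ns_spdk                                  # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up && ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joining both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br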
00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:25.064 Cannot find device "nvmf_init_br" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:25.064 Cannot find device "nvmf_init_br2" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:25.064 Cannot find device "nvmf_tgt_br" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.064 Cannot find device "nvmf_tgt_br2" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:25.064 Cannot find device "nvmf_init_br" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:25.064 Cannot find device "nvmf_init_br2" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:25.064 Cannot find device "nvmf_tgt_br" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:25.064 Cannot find device "nvmf_tgt_br2" 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:25.064 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:25.323 Cannot find device "nvmf_br" 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:25.323 Cannot find device "nvmf_init_if" 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:25.323 Cannot find device "nvmf_init_if2" 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.323 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.323 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:25.324 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.324 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:25.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:25.582 00:16:25.582 --- 10.0.0.3 ping statistics --- 00:16:25.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.582 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:25.582 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:25.582 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:16:25.582 00:16:25.582 --- 10.0.0.4 ping statistics --- 00:16:25.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.582 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:25.582 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:25.583 00:16:25.583 --- 10.0.0.1 ping statistics --- 00:16:25.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.583 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:25.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:25.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:25.583 00:16:25.583 --- 10.0.0.2 ping statistics --- 00:16:25.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.583 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=86222 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 86222 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 86222 ']' 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.583 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 [2024-12-10 10:30:00.676083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
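At this point the target has been launched inside the namespace with --wait-for-rpc, which holds off subsystem initialization until an explicit framework_start_init RPC; that is what lets the test shrink the shared iobuf pool before the TCP transport allocates from it. In outline, the RPC sequence that follows in the trace looks like the sketch below (rpc.py is shown only as a readable stand-in for the script's rpc_cmd helper; all option values are copied from the log):

    # Outline of the wait_for_buf sequence visible in the trace
    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192    # shrink the shared buffer pool
    rpc.py framework_start_init                                             # finish the deferred init
    rpc.py bdev_malloc_create -b Malloc0 32 512                             # 32 MB malloc bdev, 512 B blocks
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24              # only 24 shared data buffers
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # spdk_nvme_perf -q 4 -o 131072 -w randread then drives more I/O than the pool can satisfy;
    # iobuf_get_stats must report a non-zero small_pool.retry count (4750 here) for the test to pass.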
00:16:25.583 [2024-12-10 10:30:00.676190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.842 [2024-12-10 10:30:00.817321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.842 [2024-12-10 10:30:00.849610] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.842 [2024-12-10 10:30:00.849696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.842 [2024-12-10 10:30:00.849722] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.842 [2024-12-10 10:30:00.849729] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.842 [2024-12-10 10:30:00.849736] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.842 [2024-12-10 10:30:00.849763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:25.842 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.842 10:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 [2024-12-10 10:30:01.014096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 Malloc0 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.842 [2024-12-10 10:30:01.054939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.842 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 [2024-12-10 10:30:01.079064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.101 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:26.101 [2024-12-10 10:30:01.264646] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:27.477 Initializing NVMe Controllers 00:16:27.477 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:27.477 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:27.477 Initialization complete. Launching workers. 00:16:27.477 ======================================================== 00:16:27.477 Latency(us) 00:16:27.477 Device Information : IOPS MiB/s Average min max 00:16:27.477 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 498.02 62.25 8032.45 7018.23 11054.51 00:16:27.477 ======================================================== 00:16:27.477 Total : 498.02 62.25 8032.45 7018.23 11054.51 00:16:27.477 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.477 rmmod nvme_tcp 00:16:27.477 rmmod nvme_fabrics 00:16:27.477 rmmod nvme_keyring 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 86222 ']' 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 86222 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 86222 ']' 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 86222 00:16:27.477 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86222 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.735 killing process with pid 86222 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86222' 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 86222 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 86222 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:27.735 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:27.995 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:27.995 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:27.995 00:16:27.995 real 0m3.102s 00:16:27.995 user 0m2.524s 00:16:27.995 sys 0m0.711s 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:27.995 ************************************ 00:16:27.995 END TEST nvmf_wait_for_buf 00:16:27.995 ************************************ 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.995 ************************************ 00:16:27.995 START TEST nvmf_fuzz 00:16:27.995 ************************************ 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:27.995 * Looking for test storage... 
00:16:27.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:27.995 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:28.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.255 --rc genhtml_branch_coverage=1 00:16:28.255 --rc genhtml_function_coverage=1 00:16:28.255 --rc genhtml_legend=1 00:16:28.255 --rc geninfo_all_blocks=1 00:16:28.255 --rc geninfo_unexecuted_blocks=1 00:16:28.255 00:16:28.255 ' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:28.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.255 --rc genhtml_branch_coverage=1 00:16:28.255 --rc genhtml_function_coverage=1 00:16:28.255 --rc genhtml_legend=1 00:16:28.255 --rc geninfo_all_blocks=1 00:16:28.255 --rc geninfo_unexecuted_blocks=1 00:16:28.255 00:16:28.255 ' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:28.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.255 --rc genhtml_branch_coverage=1 00:16:28.255 --rc genhtml_function_coverage=1 00:16:28.255 --rc genhtml_legend=1 00:16:28.255 --rc geninfo_all_blocks=1 00:16:28.255 --rc geninfo_unexecuted_blocks=1 00:16:28.255 00:16:28.255 ' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:28.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.255 --rc genhtml_branch_coverage=1 00:16:28.255 --rc genhtml_function_coverage=1 00:16:28.255 --rc genhtml_legend=1 00:16:28.255 --rc geninfo_all_blocks=1 00:16:28.255 --rc geninfo_unexecuted_blocks=1 00:16:28.255 00:16:28.255 ' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
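The block above (repeated at the start of each test) is scripts/common.sh probing the installed lcov (1.15 on this machine) and comparing it against 2 with cmp_versions, so that an lcov older than 2 keeps the --rc lcov_branch_coverage/--rc lcov_function_coverage spellings exported through LCOV_OPTS and LCOV. A simplified sketch of that field-by-field comparison (not the script's exact code, and splitting only on dots and dashes):

    # Simplified version of the cmp_versions "<" check traced above
    version_lt() {                                   # succeeds if $1 is older than $2
        local -a a b; local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                     # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov option names"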
00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
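nvmftestinit now repeats the same namespace/veth/bridge setup for the fuzz target. One harness detail visible in both runs: firewall rules are never added bare. The ipts wrapper tags every rule with an SPDK_NVMF comment, and the iptr cleanup at the end of a test restores the ruleset with those tagged entries filtered out, so repeated runs do not accumulate ACCEPT rules. Roughly:

    # Tag the rules the test adds so cleanup can strip exactly those (pattern from the trace)
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on port 4420
    # ... test runs ...
    iptr                                                            # remove only the tagged rules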
00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:28.255 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:28.256 Cannot find device "nvmf_init_br" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:16:28.256 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:28.256 Cannot find device "nvmf_init_br2" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:28.256 Cannot find device "nvmf_tgt_br" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.256 Cannot find device "nvmf_tgt_br2" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:28.256 Cannot find device "nvmf_init_br" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:28.256 Cannot find device "nvmf_init_br2" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:28.256 Cannot find device "nvmf_tgt_br" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:28.256 Cannot find device "nvmf_tgt_br2" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:28.256 Cannot find device "nvmf_br" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:28.256 Cannot find device "nvmf_init_if" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:28.256 Cannot find device "nvmf_init_if2" 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:16:28.256 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:28.514 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:28.514 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:28.515 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.515 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:28.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.442 ms 00:16:28.774 00:16:28.774 --- 10.0.0.3 ping statistics --- 00:16:28.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.774 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:28.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:28.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:28.774 00:16:28.774 --- 10.0.0.4 ping statistics --- 00:16:28.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.774 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:28.774 00:16:28.774 --- 10.0.0.1 ping statistics --- 00:16:28.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.774 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:28.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:28.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:28.774 00:16:28.774 --- 10.0.0.2 ping statistics --- 00:16:28.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.774 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86476 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86476 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 86476 ']' 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
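The block above is the nvmf_veth_init helper rebuilding its virtual test network from scratch: it tears down any leftover interfaces (the "Cannot find device" messages are the expected result of that cleanup on a fresh runner), creates the nvmf_tgt_ns_spdk namespace, wires veth pairs into the nvmf_br bridge, assigns 10.0.0.1 through 10.0.0.4, opens TCP port 4420 in iptables, and ping-verifies every address before the fuzz target is started inside the namespace. A condensed sketch of that topology, assuming the interface names and 10.0.0.0/24 addressing shown in the trace (an illustration of what the helper does, not the helper itself; the second pair of interfaces, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is wired up the same way):

# target side lives in its own network namespace, initiator side stays in the default one
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF by the real helper
ping -c 1 10.0.0.3   # initiator -> target reachability check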
00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.774 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 Malloc0 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:16:29.033 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:16:29.292 Shutting down the fuzz application 00:16:29.292 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:16:29.551 Shutting down the fuzz application 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:29.551 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.810 rmmod nvme_tcp 00:16:29.810 rmmod nvme_fabrics 00:16:29.810 rmmod nvme_keyring 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 86476 ']' 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 86476 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 86476 ']' 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 86476 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86476 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.810 killing process with pid 86476 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86476' 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 86476 00:16:29.810 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 86476 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:30.069 10:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:30.069 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:16:30.328 00:16:30.328 real 0m2.245s 00:16:30.328 user 0m1.862s 00:16:30.328 sys 0m0.675s 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:30.328 ************************************ 00:16:30.328 END TEST nvmf_fuzz 00:16:30.328 ************************************ 00:16:30.328 10:30:05 
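Before the teardown just completed, fabrics_fuzz.sh had built its target out of a handful of RPCs and then driven it with nvme_fuzz twice: a 30-second randomized run with a fixed seed, then a replay of the canned cases in example.json. Roughly, and assuming rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock socket as the trace suggests:

# target-side setup, all issued over the SPDK RPC socket
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# fuzzing runs against that listener
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a    # time-bounded, seeded random commands
nvme_fuzz -m 0x2 -F "$trid" -j example.json -a       # replay of the shipped JSON test cases

The teardown shown above (nvmftestfini) then unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules, kills target pid 86476, strips the SPDK_NVMF-tagged firewall rules via iptables-save | grep -v SPDK_NVMF | iptables-restore, and lets nvmf_veth_fini delete the interfaces and namespace.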
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.328 ************************************ 00:16:30.328 START TEST nvmf_multiconnection 00:16:30.328 ************************************ 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:30.328 * Looking for test storage... 00:16:30.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:16:30.328 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:30.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.587 --rc genhtml_branch_coverage=1 00:16:30.587 --rc genhtml_function_coverage=1 00:16:30.587 --rc genhtml_legend=1 00:16:30.587 --rc geninfo_all_blocks=1 00:16:30.587 --rc geninfo_unexecuted_blocks=1 00:16:30.587 00:16:30.587 ' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:30.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.587 --rc genhtml_branch_coverage=1 00:16:30.587 --rc genhtml_function_coverage=1 00:16:30.587 --rc genhtml_legend=1 00:16:30.587 --rc geninfo_all_blocks=1 00:16:30.587 --rc geninfo_unexecuted_blocks=1 00:16:30.587 00:16:30.587 ' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:30.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.587 --rc genhtml_branch_coverage=1 00:16:30.587 --rc genhtml_function_coverage=1 00:16:30.587 --rc genhtml_legend=1 00:16:30.587 --rc geninfo_all_blocks=1 00:16:30.587 --rc geninfo_unexecuted_blocks=1 00:16:30.587 00:16:30.587 ' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:30.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.587 --rc genhtml_branch_coverage=1 00:16:30.587 --rc genhtml_function_coverage=1 00:16:30.587 --rc genhtml_legend=1 00:16:30.587 --rc geninfo_all_blocks=1 00:16:30.587 --rc geninfo_unexecuted_blocks=1 00:16:30.587 00:16:30.587 ' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.587 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.588 
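Among the variables this common.sh preamble defines is the initiator identity: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is the UUID embedded in it, both packed into the NVME_HOST array for later use. Purely as an illustration of how those values would surface on an initiator-side connect (the real connects happen later in multiconnection.sh, outside this excerpt), a hypothetical command built from this run's values might look like:

nvme connect -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
    --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a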
10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.588 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.588 10:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.588 Cannot find device "nvmf_init_br" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.588 Cannot find device "nvmf_init_br2" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:30.588 Cannot find device "nvmf_tgt_br" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.588 Cannot find device "nvmf_tgt_br2" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.588 Cannot find device "nvmf_init_br" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.588 Cannot find device "nvmf_init_br2" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.588 Cannot find device "nvmf_tgt_br" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.588 Cannot find device "nvmf_tgt_br2" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.588 Cannot find device "nvmf_br" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.588 Cannot find device "nvmf_init_if" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:16:30.588 Cannot find device "nvmf_init_if2" 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.588 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.848 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:30.848 00:16:30.848 --- 10.0.0.3 ping statistics --- 00:16:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.848 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.848 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.848 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:30.848 00:16:30.848 --- 10.0.0.4 ping statistics --- 00:16:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.848 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:30.848 00:16:30.848 --- 10.0.0.1 ping statistics --- 00:16:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.848 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:30.848 00:16:30.848 --- 10.0.0.2 ping statistics --- 00:16:30.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.848 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=86706 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 86706 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 86706 ']' 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
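nvmfappstart (common.sh@503-506 in the trace above) launches the target inside the namespace and then blocks until its RPC socket answers; the "Waiting for process to start up..." message is that wait. In outline, using the helper and variable names visible in the trace (a sketch of the flow, not the helper's full body):

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

"${NVMF_APP[@]}" -m 0xF &      # 0xF = four reactor cores for the multiconnection test
nvmfpid=$!
waitforlisten "$nvmfpid"       # polls /var/tmp/spdk.sock until the target responds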
00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.848 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.107 [2024-12-10 10:30:06.127736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:31.107 [2024-12-10 10:30:06.128461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.107 [2024-12-10 10:30:06.266393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.107 [2024-12-10 10:30:06.304952] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.107 [2024-12-10 10:30:06.305013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.107 [2024-12-10 10:30:06.305038] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.107 [2024-12-10 10:30:06.305045] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.107 [2024-12-10 10:30:06.305052] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.107 [2024-12-10 10:30:06.305321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.107 [2024-12-10 10:30:06.305472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.107 [2024-12-10 10:30:06.305695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.107 [2024-12-10 10:30:06.305697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.366 [2024-12-10 10:30:06.336257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 [2024-12-10 10:30:06.439130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:16:31.366 10:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 Malloc1 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 [2024-12-10 10:30:06.498147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 Malloc2 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 Malloc3 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.366 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.625 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.625 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 Malloc4 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 Malloc5 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:31.626 
10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 Malloc6 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 Malloc7 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 Malloc8 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 
10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.626 Malloc9 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.626 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.885 Malloc10 00:16:31.885 10:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.885 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.886 Malloc11 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.886 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:31.886 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:31.886 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:31.886 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.886 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:31.886 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:34.446 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.350 10:30:11 
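For reference, the per-subsystem setup repeated in the xtrace output above (multiconnection.sh lines 21-25) reduces to the RPC sequence below. This is a condensed sketch using the same RPC names and arguments shown in the log; rpc_cmd is the test harness helper (typically a thin wrapper around scripts/rpc.py), and the sketch is not an excerpt of multiconnection.sh itself.

# Condensed sketch of the setup loop, assuming a running SPDK nvmf target
# and the rpc_cmd helper from the test environment.
NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MB malloc bdev with 512-byte blocks, named MallocN
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # one subsystem per bdev: serial number SPDKN, -a = allow any host
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # attach the bdev as a namespace of that subsystem
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # expose the subsystem over TCP at 10.0.0.3:4420
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done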
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:36.350 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:38.253 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:38.511 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:38.512 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.512 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.512 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:38.512 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:40.412 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:40.670 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:40.670 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:40.670 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.670 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:40.670 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:42.573 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:42.832 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:42.832 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:42.832 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:42.832 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:42.832 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.734 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:44.993 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:44.993 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:44.993 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.993 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:44.993 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.895 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:47.154 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:47.154 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:47.154 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.154 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:47.154 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:49.059 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:49.318 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:49.318 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:49.318 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.318 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:49.318 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.243 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:51.502 10:30:26 
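The host-side phase interleaved above (multiconnection.sh lines 28-30) connects to each subsystem with nvme-cli and then waits until a block device with the expected serial number shows up. The sketch below mirrors the nvme connect arguments from the log; the polling loop is a simplified equivalent of the waitforserial helper, which (as visible in the xtrace) additionally caps the wait at roughly 15 retries rather than looping forever.

# Sketch of the connect-and-wait step, one iteration per subsystem.
for i in $(seq 1 11); do
    nvme connect \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
        --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
    # wait until a block device advertising serial SPDK$i appears
    while ! lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do
        sleep 2
    done
done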
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:51.502 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:51.502 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.502 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:51.502 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:53.405 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:53.665 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:53.665 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:53.665 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.665 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:53.665 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:55.566 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:55.824 [global] 00:16:55.824 thread=1 00:16:55.824 invalidate=1 00:16:55.824 rw=read 00:16:55.824 time_based=1 
00:16:55.824 runtime=10 00:16:55.824 ioengine=libaio 00:16:55.824 direct=1 00:16:55.824 bs=262144 00:16:55.824 iodepth=64 00:16:55.824 norandommap=1 00:16:55.824 numjobs=1 00:16:55.824 00:16:55.824 [job0] 00:16:55.824 filename=/dev/nvme0n1 00:16:55.824 [job1] 00:16:55.824 filename=/dev/nvme10n1 00:16:55.824 [job2] 00:16:55.824 filename=/dev/nvme1n1 00:16:55.824 [job3] 00:16:55.824 filename=/dev/nvme2n1 00:16:55.824 [job4] 00:16:55.824 filename=/dev/nvme3n1 00:16:55.824 [job5] 00:16:55.824 filename=/dev/nvme4n1 00:16:55.824 [job6] 00:16:55.824 filename=/dev/nvme5n1 00:16:55.824 [job7] 00:16:55.824 filename=/dev/nvme6n1 00:16:55.824 [job8] 00:16:55.824 filename=/dev/nvme7n1 00:16:55.824 [job9] 00:16:55.824 filename=/dev/nvme8n1 00:16:55.824 [job10] 00:16:55.824 filename=/dev/nvme9n1 00:16:55.824 Could not set queue depth (nvme0n1) 00:16:55.824 Could not set queue depth (nvme10n1) 00:16:55.824 Could not set queue depth (nvme1n1) 00:16:55.824 Could not set queue depth (nvme2n1) 00:16:55.824 Could not set queue depth (nvme3n1) 00:16:55.824 Could not set queue depth (nvme4n1) 00:16:55.824 Could not set queue depth (nvme5n1) 00:16:55.824 Could not set queue depth (nvme6n1) 00:16:55.824 Could not set queue depth (nvme7n1) 00:16:55.824 Could not set queue depth (nvme8n1) 00:16:55.824 Could not set queue depth (nvme9n1) 00:16:56.083 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:56.083 fio-3.35 00:16:56.083 Starting 11 threads 00:17:08.296 00:17:08.296 job0: (groupid=0, jobs=1): err= 0: pid=87162: Tue Dec 10 10:30:41 2024 00:17:08.296 read: IOPS=133, BW=33.4MiB/s (35.1MB/s)(340MiB/10170msec) 00:17:08.296 slat (usec): min=21, max=404781, avg=7367.06, stdev=23014.55 00:17:08.296 clat (msec): min=11, max=729, avg=470.67, stdev=111.15 00:17:08.296 lat (msec): min=12, max=822, avg=478.03, stdev=111.72 00:17:08.296 clat percentiles (msec): 00:17:08.296 | 1.00th=[ 153], 5.00th=[ 207], 10.00th=[ 359], 20.00th=[ 414], 00:17:08.296 | 30.00th=[ 443], 40.00th=[ 460], 50.00th=[ 472], 60.00th=[ 489], 00:17:08.296 | 70.00th=[ 510], 80.00th=[ 531], 90.00th=[ 617], 95.00th=[ 651], 00:17:08.296 | 99.00th=[ 709], 99.50th=[ 709], 99.90th=[ 709], 99.95th=[ 726], 00:17:08.296 | 99.99th=[ 726] 00:17:08.296 bw ( KiB/s): min=18432, max=43520, 
per=6.66%, avg=33158.90, stdev=5034.12, samples=20 00:17:08.296 iops : min= 72, max= 170, avg=129.30, stdev=19.73, samples=20 00:17:08.296 lat (msec) : 20=0.44%, 250=5.37%, 500=58.97%, 750=35.22% 00:17:08.296 cpu : usr=0.10%, sys=0.66%, ctx=238, majf=0, minf=4097 00:17:08.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.4% 00:17:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.296 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.296 job1: (groupid=0, jobs=1): err= 0: pid=87163: Tue Dec 10 10:30:41 2024 00:17:08.296 read: IOPS=210, BW=52.6MiB/s (55.2MB/s)(533MiB/10127msec) 00:17:08.296 slat (usec): min=21, max=228057, avg=4687.08, stdev=12012.08 00:17:08.296 clat (msec): min=49, max=462, avg=298.97, stdev=48.03 00:17:08.296 lat (msec): min=50, max=503, avg=303.65, stdev=48.06 00:17:08.296 clat percentiles (msec): 00:17:08.296 | 1.00th=[ 165], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 271], 00:17:08.296 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 300], 00:17:08.296 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 363], 95.00th=[ 401], 00:17:08.296 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 464], 00:17:08.296 | 99.99th=[ 464] 00:17:08.296 bw ( KiB/s): min=33280, max=60928, per=10.63%, avg=52930.45, stdev=7016.79, samples=20 00:17:08.296 iops : min= 130, max= 238, avg=206.60, stdev=27.46, samples=20 00:17:08.296 lat (msec) : 50=0.05%, 100=0.52%, 250=6.89%, 500=92.54% 00:17:08.296 cpu : usr=0.17%, sys=0.98%, ctx=443, majf=0, minf=4097 00:17:08.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:17:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.296 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.296 job2: (groupid=0, jobs=1): err= 0: pid=87164: Tue Dec 10 10:30:41 2024 00:17:08.296 read: IOPS=214, BW=53.7MiB/s (56.4MB/s)(544MiB/10122msec) 00:17:08.296 slat (usec): min=20, max=215137, avg=4599.02, stdev=11717.07 00:17:08.296 clat (msec): min=78, max=446, avg=292.74, stdev=45.62 00:17:08.296 lat (msec): min=78, max=446, avg=297.34, stdev=46.09 00:17:08.296 clat percentiles (msec): 00:17:08.296 | 1.00th=[ 93], 5.00th=[ 234], 10.00th=[ 262], 20.00th=[ 275], 00:17:08.296 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:17:08.296 | 70.00th=[ 309], 80.00th=[ 317], 90.00th=[ 338], 95.00th=[ 363], 00:17:08.296 | 99.00th=[ 405], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 447], 00:17:08.296 | 99.99th=[ 447] 00:17:08.296 bw ( KiB/s): min=45659, max=58880, per=10.85%, avg=54044.55, stdev=3630.66, samples=20 00:17:08.296 iops : min= 178, max= 230, avg=210.95, stdev=14.30, samples=20 00:17:08.296 lat (msec) : 100=1.38%, 250=5.65%, 500=92.97% 00:17:08.296 cpu : usr=0.08%, sys=1.03%, ctx=433, majf=0, minf=4097 00:17:08.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:17:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.296 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.296 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:17:08.296 job3: (groupid=0, jobs=1): err= 0: pid=87165: Tue Dec 10 10:30:41 2024 00:17:08.296 read: IOPS=149, BW=37.3MiB/s (39.1MB/s)(379MiB/10171msec) 00:17:08.296 slat (usec): min=19, max=112863, avg=6593.52, stdev=16697.74 00:17:08.296 clat (msec): min=72, max=667, avg=422.18, stdev=106.26 00:17:08.296 lat (msec): min=73, max=667, avg=428.78, stdev=107.85 00:17:08.296 clat percentiles (msec): 00:17:08.296 | 1.00th=[ 105], 5.00th=[ 249], 10.00th=[ 292], 20.00th=[ 334], 00:17:08.296 | 30.00th=[ 363], 40.00th=[ 401], 50.00th=[ 443], 60.00th=[ 468], 00:17:08.296 | 70.00th=[ 493], 80.00th=[ 518], 90.00th=[ 542], 95.00th=[ 558], 00:17:08.296 | 99.00th=[ 600], 99.50th=[ 642], 99.90th=[ 667], 99.95th=[ 667], 00:17:08.296 | 99.99th=[ 667] 00:17:08.296 bw ( KiB/s): min=27703, max=55919, per=7.47%, avg=37189.55, stdev=8474.33, samples=20 00:17:08.296 iops : min= 108, max= 218, avg=145.05, stdev=33.07, samples=20 00:17:08.296 lat (msec) : 100=0.59%, 250=4.88%, 500=67.02%, 750=27.51% 00:17:08.296 cpu : usr=0.11%, sys=0.69%, ctx=289, majf=0, minf=4097 00:17:08.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:17:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.296 issued rwts: total=1516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.296 job4: (groupid=0, jobs=1): err= 0: pid=87166: Tue Dec 10 10:30:41 2024 00:17:08.296 read: IOPS=142, BW=35.6MiB/s (37.3MB/s)(362MiB/10175msec) 00:17:08.296 slat (usec): min=22, max=126458, avg=6757.24, stdev=17793.11 00:17:08.296 clat (msec): min=26, max=649, avg=442.10, stdev=103.80 00:17:08.296 lat (msec): min=27, max=649, avg=448.86, stdev=105.05 00:17:08.296 clat percentiles (msec): 00:17:08.296 | 1.00th=[ 67], 5.00th=[ 268], 10.00th=[ 300], 20.00th=[ 376], 00:17:08.296 | 30.00th=[ 405], 40.00th=[ 426], 50.00th=[ 447], 60.00th=[ 481], 00:17:08.296 | 70.00th=[ 510], 80.00th=[ 531], 90.00th=[ 558], 95.00th=[ 575], 00:17:08.296 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 651], 99.95th=[ 651], 00:17:08.296 | 99.99th=[ 651] 00:17:08.296 bw ( KiB/s): min=22016, max=47520, per=7.11%, avg=35414.25, stdev=6545.13, samples=20 00:17:08.296 iops : min= 86, max= 185, avg=138.15, stdev=25.60, samples=20 00:17:08.296 lat (msec) : 50=0.21%, 100=1.59%, 250=2.07%, 500=63.95%, 750=32.18% 00:17:08.296 cpu : usr=0.03%, sys=0.71%, ctx=293, majf=0, minf=4097 00:17:08.296 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:17:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.296 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.296 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 job5: (groupid=0, jobs=1): err= 0: pid=87167: Tue Dec 10 10:30:41 2024 00:17:08.297 read: IOPS=139, BW=34.8MiB/s (36.5MB/s)(354MiB/10171msec) 00:17:08.297 slat (usec): min=19, max=242231, avg=7041.07, stdev=20610.24 00:17:08.297 clat (msec): min=16, max=744, avg=452.39, stdev=136.45 00:17:08.297 lat (msec): min=16, max=744, avg=459.43, stdev=137.63 00:17:08.297 clat percentiles (msec): 00:17:08.297 | 1.00th=[ 27], 5.00th=[ 54], 10.00th=[ 326], 20.00th=[ 380], 00:17:08.297 | 30.00th=[ 418], 40.00th=[ 451], 50.00th=[ 481], 60.00th=[ 506], 00:17:08.297 | 70.00th=[ 523], 
80.00th=[ 550], 90.00th=[ 584], 95.00th=[ 609], 00:17:08.297 | 99.00th=[ 701], 99.50th=[ 718], 99.90th=[ 743], 99.95th=[ 743], 00:17:08.297 | 99.99th=[ 743] 00:17:08.297 bw ( KiB/s): min=23552, max=64383, per=6.94%, avg=34563.25, stdev=8341.50, samples=20 00:17:08.297 iops : min= 92, max= 251, avg=134.80, stdev=32.56, samples=20 00:17:08.297 lat (msec) : 20=0.57%, 50=4.24%, 100=1.06%, 250=1.13%, 500=51.66% 00:17:08.297 lat (msec) : 750=41.34% 00:17:08.297 cpu : usr=0.04%, sys=0.67%, ctx=267, majf=0, minf=4097 00:17:08.297 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:17:08.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.297 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 job6: (groupid=0, jobs=1): err= 0: pid=87168: Tue Dec 10 10:30:41 2024 00:17:08.297 read: IOPS=124, BW=31.2MiB/s (32.7MB/s)(317MiB/10167msec) 00:17:08.297 slat (usec): min=24, max=329396, avg=7941.18, stdev=24345.24 00:17:08.297 clat (msec): min=92, max=725, avg=504.29, stdev=99.99 00:17:08.297 lat (msec): min=179, max=808, avg=512.23, stdev=99.11 00:17:08.297 clat percentiles (msec): 00:17:08.297 | 1.00th=[ 279], 5.00th=[ 347], 10.00th=[ 376], 20.00th=[ 414], 00:17:08.297 | 30.00th=[ 451], 40.00th=[ 481], 50.00th=[ 506], 60.00th=[ 542], 00:17:08.297 | 70.00th=[ 558], 80.00th=[ 592], 90.00th=[ 651], 95.00th=[ 667], 00:17:08.297 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 726], 99.95th=[ 726], 00:17:08.297 | 99.99th=[ 726] 00:17:08.297 bw ( KiB/s): min= 7680, max=40448, per=6.19%, avg=30814.55, stdev=8187.98, samples=20 00:17:08.297 iops : min= 30, max= 158, avg=120.15, stdev=32.02, samples=20 00:17:08.297 lat (msec) : 100=0.08%, 250=0.79%, 500=46.69%, 750=52.44% 00:17:08.297 cpu : usr=0.04%, sys=0.63%, ctx=214, majf=0, minf=4097 00:17:08.297 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:17:08.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.297 issued rwts: total=1268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 job7: (groupid=0, jobs=1): err= 0: pid=87169: Tue Dec 10 10:30:41 2024 00:17:08.297 read: IOPS=139, BW=34.8MiB/s (36.5MB/s)(354MiB/10167msec) 00:17:08.297 slat (usec): min=16, max=206906, avg=7067.75, stdev=19765.15 00:17:08.297 clat (msec): min=35, max=683, avg=452.06, stdev=108.01 00:17:08.297 lat (msec): min=35, max=683, avg=459.13, stdev=108.91 00:17:08.297 clat percentiles (msec): 00:17:08.297 | 1.00th=[ 57], 5.00th=[ 243], 10.00th=[ 351], 20.00th=[ 397], 00:17:08.297 | 30.00th=[ 426], 40.00th=[ 447], 50.00th=[ 468], 60.00th=[ 481], 00:17:08.297 | 70.00th=[ 502], 80.00th=[ 527], 90.00th=[ 567], 95.00th=[ 592], 00:17:08.297 | 99.00th=[ 625], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:17:08.297 | 99.99th=[ 684] 00:17:08.297 bw ( KiB/s): min=20439, max=42496, per=6.95%, avg=34598.45, stdev=5108.10, samples=20 00:17:08.297 iops : min= 79, max= 166, avg=134.90, stdev=20.09, samples=20 00:17:08.297 lat (msec) : 50=0.92%, 100=2.05%, 250=2.05%, 500=64.73%, 750=30.25% 00:17:08.297 cpu : usr=0.06%, sys=0.66%, ctx=267, majf=0, minf=4097 00:17:08.297 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:17:08.297 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.297 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 job8: (groupid=0, jobs=1): err= 0: pid=87170: Tue Dec 10 10:30:41 2024 00:17:08.297 read: IOPS=209, BW=52.4MiB/s (54.9MB/s)(530MiB/10127msec) 00:17:08.297 slat (usec): min=20, max=127685, avg=4723.89, stdev=11972.81 00:17:08.297 clat (msec): min=95, max=467, avg=300.43, stdev=48.60 00:17:08.297 lat (msec): min=127, max=467, avg=305.15, stdev=48.66 00:17:08.297 clat percentiles (msec): 00:17:08.297 | 1.00th=[ 153], 5.00th=[ 234], 10.00th=[ 253], 20.00th=[ 268], 00:17:08.297 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 305], 00:17:08.297 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 363], 95.00th=[ 405], 00:17:08.297 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 464], 99.95th=[ 464], 00:17:08.297 | 99.99th=[ 468] 00:17:08.297 bw ( KiB/s): min=34816, max=59392, per=10.57%, avg=52622.70, stdev=6265.79, samples=20 00:17:08.297 iops : min= 136, max= 232, avg=205.40, stdev=24.51, samples=20 00:17:08.297 lat (msec) : 100=0.05%, 250=9.34%, 500=90.62% 00:17:08.297 cpu : usr=0.10%, sys=1.02%, ctx=416, majf=0, minf=4097 00:17:08.297 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:17:08.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.297 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 job9: (groupid=0, jobs=1): err= 0: pid=87171: Tue Dec 10 10:30:41 2024 00:17:08.297 read: IOPS=344, BW=86.2MiB/s (90.4MB/s)(878MiB/10177msec) 00:17:08.297 slat (usec): min=14, max=296190, avg=2768.12, stdev=11900.07 00:17:08.297 clat (msec): min=9, max=692, avg=182.51, stdev=202.44 00:17:08.297 lat (msec): min=9, max=693, avg=185.28, stdev=205.44 00:17:08.297 clat percentiles (msec): 00:17:08.297 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 58], 20.00th=[ 63], 00:17:08.297 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 69], 00:17:08.297 | 70.00th=[ 71], 80.00th=[ 468], 90.00th=[ 542], 95.00th=[ 575], 00:17:08.297 | 99.00th=[ 625], 99.50th=[ 642], 99.90th=[ 676], 99.95th=[ 676], 00:17:08.297 | 99.99th=[ 693] 00:17:08.297 bw ( KiB/s): min=27136, max=268288, per=17.71%, avg=88178.50, stdev=96956.94, samples=20 00:17:08.297 iops : min= 106, max= 1048, avg=344.30, stdev=378.81, samples=20 00:17:08.297 lat (msec) : 10=0.09%, 20=0.63%, 50=7.15%, 100=64.70%, 250=0.66% 00:17:08.297 lat (msec) : 500=11.37%, 750=15.41% 00:17:08.297 cpu : usr=0.24%, sys=1.42%, ctx=775, majf=0, minf=4097 00:17:08.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:08.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.297 issued rwts: total=3510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 job10: (groupid=0, jobs=1): err= 0: pid=87172: Tue Dec 10 10:30:41 2024 00:17:08.297 read: IOPS=141, BW=35.3MiB/s (37.0MB/s)(359MiB/10176msec) 00:17:08.297 slat (usec): min=19, max=209831, avg=6638.54, stdev=18536.24 00:17:08.297 clat (msec): min=15, max=666, avg=445.81, 
stdev=120.16 00:17:08.297 lat (msec): min=16, max=692, avg=452.45, stdev=121.51 00:17:08.297 clat percentiles (msec): 00:17:08.297 | 1.00th=[ 40], 5.00th=[ 230], 10.00th=[ 275], 20.00th=[ 376], 00:17:08.297 | 30.00th=[ 409], 40.00th=[ 435], 50.00th=[ 460], 60.00th=[ 489], 00:17:08.297 | 70.00th=[ 518], 80.00th=[ 542], 90.00th=[ 592], 95.00th=[ 609], 00:17:08.297 | 99.00th=[ 634], 99.50th=[ 667], 99.90th=[ 667], 99.95th=[ 667], 00:17:08.297 | 99.99th=[ 667] 00:17:08.297 bw ( KiB/s): min=22528, max=48640, per=7.06%, avg=35151.00, stdev=6807.08, samples=20 00:17:08.297 iops : min= 88, max= 190, avg=137.10, stdev=26.63, samples=20 00:17:08.297 lat (msec) : 20=0.42%, 50=1.04%, 100=0.63%, 250=4.11%, 500=59.78% 00:17:08.297 lat (msec) : 750=34.03% 00:17:08.297 cpu : usr=0.09%, sys=0.65%, ctx=294, majf=0, minf=4097 00:17:08.297 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:17:08.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:08.297 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:08.297 00:17:08.297 Run status group 0 (all jobs): 00:17:08.297 READ: bw=486MiB/s (510MB/s), 31.2MiB/s-86.2MiB/s (32.7MB/s-90.4MB/s), io=4950MiB (5190MB), run=10122-10177msec 00:17:08.297 00:17:08.297 Disk stats (read/write): 00:17:08.297 nvme0n1: ios=2592/0, merge=0/0, ticks=1215635/0, in_queue=1215635, util=97.87% 00:17:08.297 nvme10n1: ios=4146/0, merge=0/0, ticks=1233238/0, in_queue=1233238, util=98.02% 00:17:08.297 nvme1n1: ios=4231/0, merge=0/0, ticks=1228846/0, in_queue=1228846, util=98.12% 00:17:08.297 nvme2n1: ios=2915/0, merge=0/0, ticks=1218479/0, in_queue=1218479, util=98.22% 00:17:08.297 nvme3n1: ios=2769/0, merge=0/0, ticks=1216839/0, in_queue=1216839, util=98.23% 00:17:08.297 nvme4n1: ios=2702/0, merge=0/0, ticks=1216901/0, in_queue=1216901, util=98.50% 00:17:08.297 nvme5n1: ios=2408/0, merge=0/0, ticks=1214412/0, in_queue=1214412, util=98.52% 00:17:08.297 nvme6n1: ios=2719/0, merge=0/0, ticks=1223491/0, in_queue=1223491, util=98.63% 00:17:08.297 nvme7n1: ios=4117/0, merge=0/0, ticks=1230402/0, in_queue=1230402, util=98.94% 00:17:08.297 nvme8n1: ios=6899/0, merge=0/0, ticks=1218817/0, in_queue=1218817, util=99.08% 00:17:08.297 nvme9n1: ios=2752/0, merge=0/0, ticks=1216192/0, in_queue=1216192, util=99.14% 00:17:08.297 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:17:08.297 [global] 00:17:08.297 thread=1 00:17:08.297 invalidate=1 00:17:08.297 rw=randwrite 00:17:08.297 time_based=1 00:17:08.297 runtime=10 00:17:08.297 ioengine=libaio 00:17:08.297 direct=1 00:17:08.297 bs=262144 00:17:08.297 iodepth=64 00:17:08.297 norandommap=1 00:17:08.297 numjobs=1 00:17:08.297 00:17:08.297 [job0] 00:17:08.297 filename=/dev/nvme0n1 00:17:08.297 [job1] 00:17:08.297 filename=/dev/nvme10n1 00:17:08.297 [job2] 00:17:08.297 filename=/dev/nvme1n1 00:17:08.297 [job3] 00:17:08.297 filename=/dev/nvme2n1 00:17:08.297 [job4] 00:17:08.297 filename=/dev/nvme3n1 00:17:08.297 [job5] 00:17:08.297 filename=/dev/nvme4n1 00:17:08.297 [job6] 00:17:08.297 filename=/dev/nvme5n1 00:17:08.297 [job7] 00:17:08.297 filename=/dev/nvme6n1 00:17:08.297 [job8] 00:17:08.297 filename=/dev/nvme7n1 00:17:08.297 [job9] 00:17:08.297 filename=/dev/nvme8n1 00:17:08.297 [job10] 
00:17:08.297 filename=/dev/nvme9n1 00:17:08.297 Could not set queue depth (nvme0n1) 00:17:08.298 Could not set queue depth (nvme10n1) 00:17:08.298 Could not set queue depth (nvme1n1) 00:17:08.298 Could not set queue depth (nvme2n1) 00:17:08.298 Could not set queue depth (nvme3n1) 00:17:08.298 Could not set queue depth (nvme4n1) 00:17:08.298 Could not set queue depth (nvme5n1) 00:17:08.298 Could not set queue depth (nvme6n1) 00:17:08.298 Could not set queue depth (nvme7n1) 00:17:08.298 Could not set queue depth (nvme8n1) 00:17:08.298 Could not set queue depth (nvme9n1) 00:17:08.298 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:08.298 fio-3.35 00:17:08.298 Starting 11 threads 00:17:18.280 00:17:18.280 job0: (groupid=0, jobs=1): err= 0: pid=87373: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=258, BW=64.7MiB/s (67.9MB/s)(659MiB/10182msec); 0 zone resets 00:17:18.280 slat (usec): min=15, max=80602, avg=3791.92, stdev=6866.82 00:17:18.280 clat (msec): min=45, max=411, avg=243.29, stdev=37.55 00:17:18.280 lat (msec): min=45, max=411, avg=247.09, stdev=37.51 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 130], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 228], 00:17:18.280 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 239], 00:17:18.280 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 257], 95.00th=[ 342], 00:17:18.280 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 397], 99.95th=[ 414], 00:17:18.280 | 99.99th=[ 414] 00:17:18.280 bw ( KiB/s): min=49152, max=71680, per=7.11%, avg=65866.75, stdev=6833.12, samples=20 00:17:18.280 iops : min= 192, max= 280, avg=257.25, stdev=26.73, samples=20 00:17:18.280 lat (msec) : 50=0.15%, 100=0.61%, 250=84.64%, 500=14.61% 00:17:18.280 cpu : usr=0.37%, sys=0.56%, ctx=903, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job1: (groupid=0, 
jobs=1): err= 0: pid=87374: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=268, BW=67.2MiB/s (70.4MB/s)(683MiB/10170msec); 0 zone resets 00:17:18.280 slat (usec): min=19, max=71791, avg=3609.29, stdev=6739.42 00:17:18.280 clat (msec): min=14, max=404, avg=234.54, stdev=58.71 00:17:18.280 lat (msec): min=14, max=404, avg=238.15, stdev=59.28 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 45], 5.00th=[ 146], 10.00th=[ 155], 20.00th=[ 224], 00:17:18.280 | 30.00th=[ 232], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:17:18.280 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 284], 95.00th=[ 368], 00:17:18.280 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 405], 00:17:18.280 | 99.99th=[ 405] 00:17:18.280 bw ( KiB/s): min=45056, max=108032, per=7.37%, avg=68319.65, stdev=14611.38, samples=20 00:17:18.280 iops : min= 176, max= 422, avg=266.85, stdev=57.08, samples=20 00:17:18.280 lat (msec) : 20=0.15%, 50=1.02%, 100=1.46%, 250=73.02%, 500=24.34% 00:17:18.280 cpu : usr=0.52%, sys=0.83%, ctx=2911, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job2: (groupid=0, jobs=1): err= 0: pid=87386: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=250, BW=62.6MiB/s (65.6MB/s)(636MiB/10159msec); 0 zone resets 00:17:18.280 slat (usec): min=18, max=52842, avg=3877.79, stdev=6985.83 00:17:18.280 clat (msec): min=55, max=402, avg=251.79, stdev=43.15 00:17:18.280 lat (msec): min=55, max=402, avg=255.67, stdev=43.34 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 138], 5.00th=[ 222], 10.00th=[ 226], 20.00th=[ 230], 00:17:18.280 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:17:18.280 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 326], 95.00th=[ 368], 00:17:18.280 | 99.00th=[ 380], 99.50th=[ 384], 99.90th=[ 388], 99.95th=[ 401], 00:17:18.280 | 99.99th=[ 401] 00:17:18.280 bw ( KiB/s): min=43008, max=69632, per=6.85%, avg=63455.45, stdev=8350.86, samples=20 00:17:18.280 iops : min= 168, max= 272, avg=247.85, stdev=32.60, samples=20 00:17:18.280 lat (msec) : 100=0.59%, 250=71.75%, 500=27.66% 00:17:18.280 cpu : usr=0.34%, sys=0.89%, ctx=2886, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job3: (groupid=0, jobs=1): err= 0: pid=87387: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=255, BW=63.8MiB/s (66.9MB/s)(650MiB/10187msec); 0 zone resets 00:17:18.280 slat (usec): min=18, max=184657, avg=3816.06, stdev=7606.45 00:17:18.280 clat (msec): min=6, max=484, avg=246.86, stdev=41.00 00:17:18.280 lat (msec): min=6, max=484, avg=250.67, stdev=40.83 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 213], 5.00th=[ 220], 10.00th=[ 222], 20.00th=[ 230], 00:17:18.280 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 241], 00:17:18.280 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 275], 
95.00th=[ 359], 00:17:18.280 | 99.00th=[ 401], 99.50th=[ 447], 99.90th=[ 477], 99.95th=[ 485], 00:17:18.280 | 99.99th=[ 485] 00:17:18.280 bw ( KiB/s): min=34304, max=71536, per=7.00%, avg=64900.70, stdev=9697.62, samples=20 00:17:18.280 iops : min= 134, max= 279, avg=253.45, stdev=37.85, samples=20 00:17:18.280 lat (msec) : 10=0.08%, 50=0.15%, 250=84.88%, 500=14.89% 00:17:18.280 cpu : usr=0.58%, sys=0.69%, ctx=3024, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job4: (groupid=0, jobs=1): err= 0: pid=87388: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=259, BW=64.8MiB/s (67.9MB/s)(660MiB/10187msec); 0 zone resets 00:17:18.280 slat (usec): min=16, max=59493, avg=3786.96, stdev=6793.04 00:17:18.280 clat (msec): min=35, max=411, avg=243.07, stdev=37.82 00:17:18.280 lat (msec): min=35, max=411, avg=246.86, stdev=37.81 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 122], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 228], 00:17:18.280 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 239], 00:17:18.280 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 268], 95.00th=[ 338], 00:17:18.280 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 414], 00:17:18.280 | 99.99th=[ 414] 00:17:18.280 bw ( KiB/s): min=49152, max=71680, per=7.12%, avg=65957.45, stdev=6699.38, samples=20 00:17:18.280 iops : min= 192, max= 280, avg=257.55, stdev=26.12, samples=20 00:17:18.280 lat (msec) : 50=0.15%, 100=0.61%, 250=84.58%, 500=14.66% 00:17:18.280 cpu : usr=0.45%, sys=0.64%, ctx=1661, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job5: (groupid=0, jobs=1): err= 0: pid=87389: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=251, BW=62.9MiB/s (66.0MB/s)(640MiB/10173msec); 0 zone resets 00:17:18.280 slat (usec): min=21, max=40454, avg=3904.91, stdev=6953.49 00:17:18.280 clat (msec): min=9, max=408, avg=250.30, stdev=48.83 00:17:18.280 lat (msec): min=9, max=408, avg=254.21, stdev=49.13 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 75], 5.00th=[ 215], 10.00th=[ 226], 20.00th=[ 230], 00:17:18.280 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:17:18.280 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 326], 95.00th=[ 372], 00:17:18.280 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 409], 99.95th=[ 409], 00:17:18.280 | 99.99th=[ 409] 00:17:18.280 bw ( KiB/s): min=40960, max=75776, per=6.90%, avg=63916.45, stdev=9152.04, samples=20 00:17:18.280 iops : min= 160, max= 296, avg=249.65, stdev=35.74, samples=20 00:17:18.280 lat (msec) : 10=0.16%, 20=0.16%, 50=0.31%, 100=0.62%, 250=71.88% 00:17:18.280 lat (msec) : 500=26.88% 00:17:18.280 cpu : usr=0.45%, sys=0.75%, ctx=2755, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job6: (groupid=0, jobs=1): err= 0: pid=87390: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=258, BW=64.6MiB/s (67.7MB/s)(658MiB/10183msec); 0 zone resets 00:17:18.280 slat (usec): min=17, max=55437, avg=3711.34, stdev=6686.03 00:17:18.280 clat (msec): min=18, max=414, avg=243.97, stdev=36.03 00:17:18.280 lat (msec): min=21, max=414, avg=247.68, stdev=35.76 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 163], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 228], 00:17:18.280 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 239], 00:17:18.280 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 275], 95.00th=[ 338], 00:17:18.280 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 414], 00:17:18.280 | 99.99th=[ 414] 00:17:18.280 bw ( KiB/s): min=47104, max=71680, per=7.09%, avg=65708.00, stdev=7348.65, samples=20 00:17:18.280 iops : min= 184, max= 280, avg=256.65, stdev=28.69, samples=20 00:17:18.280 lat (msec) : 20=0.04%, 50=0.04%, 100=0.27%, 250=84.83%, 500=14.83% 00:17:18.280 cpu : usr=0.51%, sys=0.72%, ctx=2811, majf=0, minf=1 00:17:18.280 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:18.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.280 issued rwts: total=0,2630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.280 job7: (groupid=0, jobs=1): err= 0: pid=87391: Tue Dec 10 10:30:52 2024 00:17:18.280 write: IOPS=251, BW=62.9MiB/s (65.9MB/s)(639MiB/10161msec); 0 zone resets 00:17:18.280 slat (usec): min=17, max=29667, avg=3908.39, stdev=6935.88 00:17:18.280 clat (msec): min=25, max=398, avg=250.41, stdev=46.07 00:17:18.280 lat (msec): min=25, max=398, avg=254.32, stdev=46.31 00:17:18.280 clat percentiles (msec): 00:17:18.280 | 1.00th=[ 95], 5.00th=[ 215], 10.00th=[ 226], 20.00th=[ 230], 00:17:18.280 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:17:18.280 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 321], 95.00th=[ 368], 00:17:18.280 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 397], 00:17:18.280 | 99.99th=[ 397] 00:17:18.280 bw ( KiB/s): min=43008, max=71823, per=6.89%, avg=63821.20, stdev=8623.10, samples=20 00:17:18.280 iops : min= 168, max= 280, avg=249.25, stdev=33.65, samples=20 00:17:18.280 lat (msec) : 50=0.47%, 100=0.63%, 250=71.56%, 500=27.35% 00:17:18.280 cpu : usr=0.55%, sys=0.70%, ctx=2869, majf=0, minf=1 00:17:18.281 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:18.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.281 issued rwts: total=0,2556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.281 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.281 job8: (groupid=0, jobs=1): err= 0: pid=87393: Tue Dec 10 10:30:52 2024 00:17:18.281 write: IOPS=359, BW=89.8MiB/s (94.2MB/s)(908MiB/10109msec); 0 zone resets 00:17:18.281 slat (usec): min=17, max=108731, avg=2615.50, stdev=5240.01 00:17:18.281 clat (msec): min=35, max=414, avg=175.40, stdev=57.74 00:17:18.281 lat 
(msec): min=37, max=414, avg=178.02, stdev=58.41 00:17:18.281 clat percentiles (msec): 00:17:18.281 | 1.00th=[ 64], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 155], 00:17:18.281 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 163], 00:17:18.281 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 230], 95.00th=[ 363], 00:17:18.281 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 414], 99.95th=[ 414], 00:17:18.281 | 99.99th=[ 414] 00:17:18.281 bw ( KiB/s): min=40960, max=102400, per=9.86%, avg=91387.40, stdev=20450.40, samples=20 00:17:18.281 iops : min= 160, max= 400, avg=356.95, stdev=79.92, samples=20 00:17:18.281 lat (msec) : 50=0.33%, 100=2.06%, 250=90.53%, 500=7.07% 00:17:18.281 cpu : usr=0.60%, sys=1.21%, ctx=2855, majf=0, minf=1 00:17:18.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:17:18.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.281 issued rwts: total=0,3633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.281 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.281 job9: (groupid=0, jobs=1): err= 0: pid=87394: Tue Dec 10 10:30:52 2024 00:17:18.281 write: IOPS=766, BW=192MiB/s (201MB/s)(1929MiB/10061msec); 0 zone resets 00:17:18.281 slat (usec): min=19, max=136597, avg=1231.49, stdev=2865.79 00:17:18.281 clat (usec): min=1823, max=420960, avg=82217.94, stdev=42750.96 00:17:18.281 lat (usec): min=1870, max=421037, avg=83449.43, stdev=43206.21 00:17:18.281 clat percentiles (msec): 00:17:18.281 | 1.00th=[ 15], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 74], 00:17:18.281 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 79], 00:17:18.281 | 70.00th=[ 80], 80.00th=[ 80], 90.00th=[ 81], 95.00th=[ 82], 00:17:18.281 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 401], 00:17:18.281 | 99.99th=[ 422] 00:17:18.281 bw ( KiB/s): min=39936, max=243200, per=21.14%, avg=195844.75, stdev=47087.61, samples=20 00:17:18.281 iops : min= 156, max= 950, avg=765.00, stdev=183.93, samples=20 00:17:18.281 lat (msec) : 2=0.01%, 4=0.13%, 10=0.57%, 20=0.83%, 50=1.76% 00:17:18.281 lat (msec) : 100=93.48%, 250=0.76%, 500=2.45% 00:17:18.281 cpu : usr=1.21%, sys=2.29%, ctx=2405, majf=0, minf=1 00:17:18.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:18.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.281 issued rwts: total=0,7714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.281 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.281 job10: (groupid=0, jobs=1): err= 0: pid=87395: Tue Dec 10 10:30:52 2024 00:17:18.281 write: IOPS=457, BW=114MiB/s (120MB/s)(1157MiB/10113msec); 0 zone resets 00:17:18.281 slat (usec): min=17, max=55305, avg=2156.64, stdev=3946.84 00:17:18.281 clat (msec): min=7, max=273, avg=137.69, stdev=38.90 00:17:18.281 lat (msec): min=7, max=273, avg=139.85, stdev=39.33 00:17:18.281 clat percentiles (msec): 00:17:18.281 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 74], 00:17:18.281 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:17:18.281 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 167], 00:17:18.281 | 99.00th=[ 171], 99.50th=[ 211], 99.90th=[ 264], 99.95th=[ 264], 00:17:18.281 | 99.99th=[ 275] 00:17:18.281 bw ( KiB/s): min=96768, max=231473, per=12.60%, avg=116779.20, stdev=37690.63, samples=20 
00:17:18.281 iops : min= 378, max= 904, avg=456.10, stdev=147.22, samples=20 00:17:18.281 lat (msec) : 10=0.02%, 20=0.17%, 50=0.52%, 100=22.76%, 250=76.31% 00:17:18.281 lat (msec) : 500=0.22% 00:17:18.281 cpu : usr=0.86%, sys=1.35%, ctx=6105, majf=0, minf=1 00:17:18.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:17:18.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:18.281 issued rwts: total=0,4626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.281 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:18.281 00:17:18.281 Run status group 0 (all jobs): 00:17:18.281 WRITE: bw=905MiB/s (949MB/s), 62.6MiB/s-192MiB/s (65.6MB/s-201MB/s), io=9217MiB (9665MB), run=10061-10187msec 00:17:18.281 00:17:18.281 Disk stats (read/write): 00:17:18.281 nvme0n1: ios=49/5140, merge=0/0, ticks=74/1208042, in_queue=1208116, util=97.86% 00:17:18.281 nvme10n1: ios=49/5337, merge=0/0, ticks=58/1209177, in_queue=1209235, util=98.05% 00:17:18.281 nvme1n1: ios=42/4957, merge=0/0, ticks=35/1208131, in_queue=1208166, util=98.02% 00:17:18.281 nvme2n1: ios=26/5069, merge=0/0, ticks=23/1208004, in_queue=1208027, util=98.06% 00:17:18.281 nvme3n1: ios=25/5149, merge=0/0, ticks=44/1208635, in_queue=1208679, util=98.21% 00:17:18.281 nvme4n1: ios=0/4995, merge=0/0, ticks=0/1209337, in_queue=1209337, util=98.33% 00:17:18.281 nvme5n1: ios=0/5130, merge=0/0, ticks=0/1208459, in_queue=1208459, util=98.32% 00:17:18.281 nvme6n1: ios=0/4982, merge=0/0, ticks=0/1207934, in_queue=1207934, util=98.38% 00:17:18.281 nvme7n1: ios=0/7130, merge=0/0, ticks=0/1214052, in_queue=1214052, util=98.63% 00:17:18.281 nvme8n1: ios=0/15265, merge=0/0, ticks=0/1216432, in_queue=1216432, util=98.67% 00:17:18.281 nvme9n1: ios=0/9121, merge=0/0, ticks=0/1212131, in_queue=1212131, util=98.89% 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:18.281 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:18.281 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:18.281 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:18.281 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode5 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:17:18.281 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.281 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:17:18.282 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.282 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode7 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:17:18.282 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:17:18.282 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode9 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:17:18.282 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:17:18.282 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:18.282 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.571 rmmod nvme_tcp 00:17:18.571 rmmod nvme_fabrics 00:17:18.571 rmmod nvme_keyring 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 86706 ']' 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 86706 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 86706 ']' 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 86706 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86706 00:17:18.571 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.572 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.572 killing process with pid 86706 00:17:18.572 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86706' 00:17:18.572 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 86706 00:17:18.572 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 86706 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:18.837 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:18.837 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:17:19.096 00:17:19.096 real 0m48.795s 00:17:19.096 user 2m46.118s 00:17:19.096 sys 0m26.115s 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.096 10:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:19.096 ************************************ 00:17:19.096 END TEST nvmf_multiconnection 00:17:19.096 ************************************ 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.096 10:30:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.096 ************************************ 00:17:19.096 START TEST nvmf_initiator_timeout 00:17:19.096 ************************************ 00:17:19.097 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:19.356 * Looking for test storage... 00:17:19.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:19.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.357 --rc genhtml_branch_coverage=1 00:17:19.357 --rc genhtml_function_coverage=1 00:17:19.357 --rc genhtml_legend=1 00:17:19.357 --rc geninfo_all_blocks=1 00:17:19.357 --rc geninfo_unexecuted_blocks=1 00:17:19.357 00:17:19.357 ' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:19.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.357 --rc genhtml_branch_coverage=1 00:17:19.357 --rc genhtml_function_coverage=1 00:17:19.357 --rc genhtml_legend=1 00:17:19.357 --rc geninfo_all_blocks=1 00:17:19.357 --rc geninfo_unexecuted_blocks=1 00:17:19.357 00:17:19.357 ' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:19.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.357 --rc genhtml_branch_coverage=1 00:17:19.357 --rc genhtml_function_coverage=1 00:17:19.357 --rc genhtml_legend=1 00:17:19.357 --rc geninfo_all_blocks=1 00:17:19.357 --rc geninfo_unexecuted_blocks=1 00:17:19.357 00:17:19.357 ' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:19.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.357 --rc genhtml_branch_coverage=1 00:17:19.357 --rc genhtml_function_coverage=1 00:17:19.357 --rc genhtml_legend=1 00:17:19.357 --rc geninfo_all_blocks=1 00:17:19.357 --rc geninfo_unexecuted_blocks=1 00:17:19.357 00:17:19.357 ' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.357 10:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.357 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.357 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
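For orientation, the variables set above name the pieces of the virtual topology that nvmf_veth_init builds in the trace that follows: veth pairs for the initiator side (nvmf_init_if*) and the target side (nvmf_tgt_if*), a bridge nvmf_br joining their peer ends, and a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces. A condensed, hand-written sketch of that setup is shown below; it is illustrative only (one veth pair per side instead of the test's two, no prior cleanup, no iptables rules, no error handling) and assumes it is run as root:

  # namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if end carries an IP, the *_br end becomes a bridge port
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

  # move the target end into the namespace and address both ends (same /24)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring the links up on both sides
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the peer ends so initiator (10.0.0.1) and target (10.0.0.3) can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # sanity check, mirroring the pings recorded further down in the trace
  ping -c 1 10.0.0.3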
00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:19.358 Cannot find device "nvmf_init_br" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:19.358 Cannot find device "nvmf_init_br2" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:19.358 Cannot find device "nvmf_tgt_br" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.358 Cannot find device "nvmf_tgt_br2" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:19.358 Cannot find device "nvmf_init_br" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:19.358 Cannot find device "nvmf_init_br2" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:19.358 Cannot find device "nvmf_tgt_br" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:19.358 Cannot find device "nvmf_tgt_br2" 00:17:19.358 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:17:19.358 10:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:19.618 Cannot find device "nvmf_br" 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:19.618 Cannot find device "nvmf_init_if" 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:19.618 Cannot find device "nvmf_init_if2" 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:19.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:19.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:19.618 00:17:19.618 --- 10.0.0.3 ping statistics --- 00:17:19.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.618 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:19.618 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:19.618 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:19.618 00:17:19.618 --- 10.0.0.4 ping statistics --- 00:17:19.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.618 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:19.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:19.618 00:17:19.618 --- 10.0.0.1 ping statistics --- 00:17:19.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.618 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:19.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:19.618 00:17:19.618 --- 10.0.0.2 ping statistics --- 00:17:19.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.618 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:19.618 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=87829 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 87829 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 87829 ']' 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.877 10:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.877 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:19.877 [2024-12-10 10:30:54.920621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:19.877 [2024-12-10 10:30:54.920704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.877 [2024-12-10 10:30:55.064073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.137 [2024-12-10 10:30:55.105056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.137 [2024-12-10 10:30:55.105127] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.137 [2024-12-10 10:30:55.105145] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.137 [2024-12-10 10:30:55.105162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.137 [2024-12-10 10:30:55.105171] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
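Stripped of the xtrace noise, the target bring-up recorded from here through the nvme connect step reduces to a short RPC sequence. The sketch below is a minimal reconstruction, not the harness's exact code: it assumes it is run as root from the SPDK repo root, uses the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper, and replaces waitforlisten with a simple wait for the RPC socket. Bdev names, NQNs, addresses and delay values are copied from the trace:

  # start the target inside the test namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # 64 MiB malloc bdev, wrapped in a delay bdev that adds 30 us to reads and writes
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

  # TCP transport, subsystem cnode1 exporting Delay0, listener on 10.0.0.3:4420
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # kernel initiator connects from the root namespace across the bridge
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a \
      --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a

The initiator-timeout exercise itself then consists of the bdev_delay_update_latency calls visible later in the trace: Delay0's latencies are raised from 30 to 31000000 microseconds (p99 write to 310000000) while the 60-second fio write job runs against /dev/nvme0n1, then dropped back to 30 a few seconds later, the point being that the kernel initiator should ride out the stall rather than time out.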
00:17:20.137 [2024-12-10 10:30:55.105300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.137 [2024-12-10 10:30:55.105555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.137 [2024-12-10 10:30:55.105927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.137 [2024-12-10 10:30:55.105936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.137 [2024-12-10 10:30:55.138831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 Malloc0 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 Delay0 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 [2024-12-10 10:30:55.282652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:20.137 10:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.137 [2024-12-10 10:30:55.310764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.137 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:20.400 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.400 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:17:20.400 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.400 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:20.400 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87886 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:17:22.304 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:17:22.304 [global] 00:17:22.304 thread=1 00:17:22.304 invalidate=1 00:17:22.304 rw=write 00:17:22.304 time_based=1 00:17:22.304 runtime=60 00:17:22.304 ioengine=libaio 00:17:22.304 direct=1 00:17:22.304 bs=4096 00:17:22.304 iodepth=1 00:17:22.304 norandommap=0 00:17:22.304 numjobs=1 00:17:22.304 00:17:22.304 verify_dump=1 00:17:22.304 verify_backlog=512 00:17:22.304 verify_state_save=0 00:17:22.304 do_verify=1 00:17:22.304 verify=crc32c-intel 00:17:22.304 [job0] 00:17:22.304 filename=/dev/nvme0n1 00:17:22.304 Could not set queue depth (nvme0n1) 00:17:22.563 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:22.563 fio-3.35 00:17:22.563 Starting 1 thread 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:25.850 true 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:25.850 true 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:25.850 true 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:25.850 true 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.850 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:28.384 true 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:28.384 true 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:28.384 true 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.384 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:17:28.385 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.385 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:28.385 true 00:17:28.385 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.385 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:17:28.385 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87886 00:18:24.626 00:18:24.626 job0: (groupid=0, jobs=1): err= 0: pid=87907: Tue Dec 10 10:31:57 2024 00:18:24.626 read: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec) 00:18:24.626 slat (usec): min=10, max=12673, avg=13.87, stdev=66.16 00:18:24.626 clat (usec): min=147, max=40654k, avg=995.94, stdev=180570.94 00:18:24.626 lat (usec): min=158, max=40654k, avg=1009.81, stdev=180570.95 00:18:24.626 clat percentiles (usec): 00:18:24.626 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:18:24.627 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:18:24.627 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 233], 00:18:24.627 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 461], 99.95th=[ 594], 00:18:24.627 | 99.99th=[ 1020] 00:18:24.627 write: IOPS=846, BW=3386KiB/s (3468kB/s)(198MiB/60000msec); 0 zone resets 00:18:24.627 slat (usec): min=12, max=585, avg=19.08, stdev= 6.75 00:18:24.627 clat (usec): min=112, max=1439, avg=151.35, stdev=25.73 00:18:24.627 lat (usec): min=127, max=1473, avg=170.43, stdev=27.44 00:18:24.627 clat percentiles (usec): 00:18:24.627 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 137], 00:18:24.627 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:18:24.627 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 186], 00:18:24.627 | 99.00th=[ 
210], 99.50th=[ 233], 99.90th=[ 363], 99.95th=[ 498], 00:18:24.627 | 99.99th=[ 832] 00:18:24.627 bw ( KiB/s): min= 1400, max=12288, per=100.00%, avg=10185.77, stdev=2138.15, samples=39 00:18:24.627 iops : min= 350, max= 3072, avg=2546.44, stdev=534.53, samples=39 00:18:24.627 lat (usec) : 250=99.03%, 500=0.91%, 750=0.04%, 1000=0.01% 00:18:24.627 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:18:24.627 cpu : usr=0.60%, sys=2.17%, ctx=101499, majf=0, minf=5 00:18:24.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.627 issued rwts: total=50688,50794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.627 00:18:24.627 Run status group 0 (all jobs): 00:18:24.627 READ: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:18:24.627 WRITE: bw=3386KiB/s (3468kB/s), 3386KiB/s-3386KiB/s (3468kB/s-3468kB/s), io=198MiB (208MB), run=60000-60000msec 00:18:24.627 00:18:24.627 Disk stats (read/write): 00:18:24.627 nvme0n1: ios=50652/50688, merge=0/0, ticks=10356/8323, in_queue=18679, util=99.91% 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:18:24.627 nvmf hotplug test: fio successful as expected 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:18:24.627 10:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:24.627 rmmod nvme_tcp 00:18:24.627 rmmod nvme_fabrics 00:18:24.627 rmmod nvme_keyring 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 87829 ']' 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 87829 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 87829 ']' 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 87829 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.627 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87829 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.627 killing process with pid 87829 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87829' 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 87829 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 87829 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:18:24.627 10:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:18:24.627 00:18:24.627 real 1m4.179s 00:18:24.627 user 3m50.760s 00:18:24.627 sys 0m21.759s 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:24.627 ************************************ 00:18:24.627 END TEST nvmf_initiator_timeout 00:18:24.627 ************************************ 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:18:24.627 00:18:24.627 real 6m51.773s 00:18:24.627 user 17m5.150s 00:18:24.627 sys 1m53.811s 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.627 10:31:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.627 ************************************ 00:18:24.627 END TEST nvmf_target_extra 00:18:24.627 ************************************ 00:18:24.627 10:31:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:24.627 10:31:58 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:24.627 10:31:58 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.627 10:31:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.627 ************************************ 00:18:24.627 START TEST nvmf_host 00:18:24.627 ************************************ 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:24.628 * Looking for test storage... 00:18:24.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.628 --rc genhtml_branch_coverage=1 00:18:24.628 --rc genhtml_function_coverage=1 00:18:24.628 --rc genhtml_legend=1 00:18:24.628 --rc geninfo_all_blocks=1 00:18:24.628 --rc geninfo_unexecuted_blocks=1 00:18:24.628 00:18:24.628 ' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.628 --rc genhtml_branch_coverage=1 00:18:24.628 --rc genhtml_function_coverage=1 00:18:24.628 --rc genhtml_legend=1 00:18:24.628 --rc geninfo_all_blocks=1 00:18:24.628 --rc geninfo_unexecuted_blocks=1 00:18:24.628 00:18:24.628 ' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.628 --rc genhtml_branch_coverage=1 00:18:24.628 --rc genhtml_function_coverage=1 00:18:24.628 --rc genhtml_legend=1 00:18:24.628 --rc geninfo_all_blocks=1 00:18:24.628 --rc geninfo_unexecuted_blocks=1 00:18:24.628 00:18:24.628 ' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.628 --rc genhtml_branch_coverage=1 00:18:24.628 --rc genhtml_function_coverage=1 00:18:24.628 --rc genhtml_legend=1 00:18:24.628 --rc geninfo_all_blocks=1 00:18:24.628 --rc geninfo_unexecuted_blocks=1 00:18:24.628 00:18:24.628 ' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.628 10:31:58 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.628 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.628 ************************************ 00:18:24.628 START TEST nvmf_identify 00:18:24.628 ************************************ 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:24.628 * Looking for test storage... 
00:18:24.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.628 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.629 --rc genhtml_branch_coverage=1 00:18:24.629 --rc genhtml_function_coverage=1 00:18:24.629 --rc genhtml_legend=1 00:18:24.629 --rc geninfo_all_blocks=1 00:18:24.629 --rc geninfo_unexecuted_blocks=1 00:18:24.629 00:18:24.629 ' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.629 --rc genhtml_branch_coverage=1 00:18:24.629 --rc genhtml_function_coverage=1 00:18:24.629 --rc genhtml_legend=1 00:18:24.629 --rc geninfo_all_blocks=1 00:18:24.629 --rc geninfo_unexecuted_blocks=1 00:18:24.629 00:18:24.629 ' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.629 --rc genhtml_branch_coverage=1 00:18:24.629 --rc genhtml_function_coverage=1 00:18:24.629 --rc genhtml_legend=1 00:18:24.629 --rc geninfo_all_blocks=1 00:18:24.629 --rc geninfo_unexecuted_blocks=1 00:18:24.629 00:18:24.629 ' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:24.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.629 --rc genhtml_branch_coverage=1 00:18:24.629 --rc genhtml_function_coverage=1 00:18:24.629 --rc genhtml_legend=1 00:18:24.629 --rc geninfo_all_blocks=1 00:18:24.629 --rc geninfo_unexecuted_blocks=1 00:18:24.629 00:18:24.629 ' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.629 
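
The trace above shows nvmf/common.sh deriving the initiator identity with nvme gen-hostnqn and packing it into the NVME_HOST argument array that later nvme connect/discover calls reuse. A minimal standalone sketch of the same pattern, keeping the variable names from the trace (the uuidgen fallback is an assumption for hosts without nvme-cli):

  # Derive the host NQN with nvme-cli; fall back to uuidgen when nvme is absent (assumption).
  if command -v nvme >/dev/null 2>&1; then
      NVME_HOSTNQN=$(nvme gen-hostnqn)
  else
      NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
  fi
  NVME_HOSTID=${NVME_HOSTNQN##*:}                  # host ID is the UUID portion of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "host NQN: $NVME_HOSTNQN (host ID: $NVME_HOSTID)"
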
10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.629 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.629 10:31:58 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.629 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:24.630 Cannot find device "nvmf_init_br" 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:24.630 Cannot find device "nvmf_init_br2" 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:24.630 10:31:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:24.630 Cannot find device "nvmf_tgt_br" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:24.630 Cannot find device "nvmf_tgt_br2" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:24.630 Cannot find device "nvmf_init_br" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:24.630 Cannot find device "nvmf_init_br2" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:24.630 Cannot find device "nvmf_tgt_br" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:24.630 Cannot find device "nvmf_tgt_br2" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:24.630 Cannot find device "nvmf_br" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:24.630 Cannot find device "nvmf_init_if" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:24.630 Cannot find device "nvmf_init_if2" 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.630 
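
nvmf_veth_init first tries to tear down any leftover interfaces (the "Cannot find device" messages above are the expected no-op case) and then builds a fresh topology: veth pairs for the initiator and the target, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all host-side peers joined by the nvmf_br bridge. A condensed sketch of that layout, reduced to one initiator pair and one target pair (names and addresses follow the trace; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends
  ip link set nvmf_tgt_br master nvmf_br
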
10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:24.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:24.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:24.630 00:18:24.630 --- 10.0.0.3 ping statistics --- 00:18:24.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.630 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:24.630 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:24.630 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:24.630 00:18:24.630 --- 10.0.0.4 ping statistics --- 00:18:24.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.630 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:24.630 00:18:24.630 --- 10.0.0.1 ping statistics --- 00:18:24.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.630 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:24.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:24.630 00:18:24.630 --- 10.0.0.2 ping statistics --- 00:18:24.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.630 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88830 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88830 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 88830 ']' 00:18:24.630 
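
Past the connectivity pings, the target application is launched inside the namespace and the script blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that start-and-wait step, assuming the repo path and socket path from the trace (rpc_get_methods is used here only as a cheap liveness probe):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  rpc_sock=/var/tmp/spdk.sock
  # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: run on cores 0-3
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poor man's waitforlisten: poll the RPC socket until the target responds.
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
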
10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.630 10:31:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:24.630 [2024-12-10 10:31:59.456475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:24.631 [2024-12-10 10:31:59.456586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.631 [2024-12-10 10:31:59.596336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:24.631 [2024-12-10 10:31:59.630624] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.631 [2024-12-10 10:31:59.630668] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.631 [2024-12-10 10:31:59.630680] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.631 [2024-12-10 10:31:59.630688] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.631 [2024-12-10 10:31:59.630695] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
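
Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the startup banner above points at two ways to look at the captured events. A short sketch following those hints (the spdk_trace tool ships with the same build; the destination paths here are arbitrary):

  # Live snapshot of the target's trace ring buffer (app name 'nvmf', shm id 0).
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # Or keep the raw shared-memory trace file for offline decoding.
  cp /dev/shm/nvmf_trace.0 /tmp/
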
00:18:24.631 [2024-12-10 10:31:59.630844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.631 [2024-12-10 10:31:59.631677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.631 [2024-12-10 10:31:59.631759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.631 [2024-12-10 10:31:59.631765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.631 [2024-12-10 10:31:59.662794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.234 [2024-12-10 10:32:00.436959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.234 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 Malloc0 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 [2024-12-10 10:32:00.527302] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 [ 00:18:25.493 { 00:18:25.493 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:25.493 "subtype": "Discovery", 00:18:25.493 "listen_addresses": [ 00:18:25.493 { 00:18:25.493 "trtype": "TCP", 00:18:25.493 "adrfam": "IPv4", 00:18:25.493 "traddr": "10.0.0.3", 00:18:25.493 "trsvcid": "4420" 00:18:25.493 } 00:18:25.493 ], 00:18:25.493 "allow_any_host": true, 00:18:25.493 "hosts": [] 00:18:25.493 }, 00:18:25.493 { 00:18:25.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.493 "subtype": "NVMe", 00:18:25.493 "listen_addresses": [ 00:18:25.493 { 00:18:25.493 "trtype": "TCP", 00:18:25.493 "adrfam": "IPv4", 00:18:25.493 "traddr": "10.0.0.3", 00:18:25.493 "trsvcid": "4420" 00:18:25.493 } 00:18:25.493 ], 00:18:25.493 "allow_any_host": true, 00:18:25.493 "hosts": [], 00:18:25.493 "serial_number": "SPDK00000000000001", 00:18:25.493 "model_number": "SPDK bdev Controller", 00:18:25.493 "max_namespaces": 32, 00:18:25.493 "min_cntlid": 1, 00:18:25.493 "max_cntlid": 65519, 00:18:25.493 "namespaces": [ 00:18:25.493 { 00:18:25.493 "nsid": 1, 00:18:25.493 "bdev_name": "Malloc0", 00:18:25.493 "name": "Malloc0", 00:18:25.493 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:25.493 "eui64": "ABCDEF0123456789", 00:18:25.493 "uuid": "98423e9e-061b-4552-ae20-1a2998a7afcd" 00:18:25.493 } 00:18:25.493 ] 00:18:25.493 } 00:18:25.493 ] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.493 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:25.493 [2024-12-10 10:32:00.577274] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
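
rpc_cmd in the trace is the test helper that forwards to scripts/rpc.py on the target's RPC socket; the configuration it just built (TCP transport, a 64 MiB Malloc0 bdev exposed as namespace 1 of cnode1, plus data and discovery listeners on 10.0.0.3:4420) can be reproduced directly with the same calls, assuming the socket path from the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192        # options mirror NVMF_TRANSPORT_OPTS; -u sets in-capsule data size
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems                            # prints the JSON document shown above

The spdk_nvme_identify run whose startup banner begins just above then connects to this discovery listener as a host.
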
00:18:25.493 [2024-12-10 10:32:00.577312] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88865 ] 00:18:25.493 [2024-12-10 10:32:00.710656] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:25.493 [2024-12-10 10:32:00.710712] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:25.493 [2024-12-10 10:32:00.710719] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:25.493 [2024-12-10 10:32:00.710730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:25.493 [2024-12-10 10:32:00.710738] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:25.493 [2024-12-10 10:32:00.711073] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:25.493 [2024-12-10 10:32:00.711132] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9bcbd0 0 00:18:25.759 [2024-12-10 10:32:00.723499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:25.759 [2024-12-10 10:32:00.723522] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:25.759 [2024-12-10 10:32:00.723528] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:25.759 [2024-12-10 10:32:00.723531] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:25.759 [2024-12-10 10:32:00.723565] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.723572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.723591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.759 [2024-12-10 10:32:00.723632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:25.759 [2024-12-10 10:32:00.723667] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.759 [2024-12-10 10:32:00.731469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.759 [2024-12-10 10:32:00.731489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.759 [2024-12-10 10:32:00.731493] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731498] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.759 [2024-12-10 10:32:00.731512] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:25.759 [2024-12-10 10:32:00.731519] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:25.759 [2024-12-10 10:32:00.731525] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:25.759 [2024-12-10 10:32:00.731543] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.759 
[2024-12-10 10:32:00.731552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.759 [2024-12-10 10:32:00.731577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.759 [2024-12-10 10:32:00.731649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.759 [2024-12-10 10:32:00.731712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.759 [2024-12-10 10:32:00.731720] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.759 [2024-12-10 10:32:00.731724] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.759 [2024-12-10 10:32:00.731736] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:25.759 [2024-12-10 10:32:00.731744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:25.759 [2024-12-10 10:32:00.731753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731757] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.759 [2024-12-10 10:32:00.731770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.759 [2024-12-10 10:32:00.731790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.759 [2024-12-10 10:32:00.731833] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.759 [2024-12-10 10:32:00.731840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.759 [2024-12-10 10:32:00.731844] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.759 [2024-12-10 10:32:00.731855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:25.759 [2024-12-10 10:32:00.731864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:25.759 [2024-12-10 10:32:00.731872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.731881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.759 [2024-12-10 10:32:00.731888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.759 [2024-12-10 10:32:00.731907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.759 [2024-12-10 10:32:00.731975] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.759 [2024-12-10 10:32:00.731996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:18:25.759 [2024-12-10 10:32:00.732000] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.732004] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.759 [2024-12-10 10:32:00.732010] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:25.759 [2024-12-10 10:32:00.732019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.732024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.759 [2024-12-10 10:32:00.732027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.759 [2024-12-10 10:32:00.732035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.759 [2024-12-10 10:32:00.732051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.759 [2024-12-10 10:32:00.732093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.759 [2024-12-10 10:32:00.732100] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.759 [2024-12-10 10:32:00.732103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.760 [2024-12-10 10:32:00.732107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.760 [2024-12-10 10:32:00.732112] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:25.760 [2024-12-10 10:32:00.732117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:25.760 [2024-12-10 10:32:00.732125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:25.760 [2024-12-10 10:32:00.732230] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:25.760 [2024-12-10 10:32:00.732235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:25.760 [2024-12-10 10:32:00.732244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.760 [2024-12-10 10:32:00.732249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.760 [2024-12-10 10:32:00.732253] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.760 [2024-12-10 10:32:00.732260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.760 ===================================================== 00:18:25.760 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:25.760 ===================================================== 00:18:25.760 Controller Capabilities/Features 00:18:25.760 ================================ 00:18:25.760 Vendor ID: 0000 00:18:25.760 Subsystem Vendor ID: 0000 00:18:25.760 Serial Number: .................... 00:18:25.760 Model Number: ........................................ 
00:18:25.760 Firmware Version: 24.09.1 00:18:25.760 Recommended Arb Burst: 0 00:18:25.760 IEEE OUI Identifier: 00 00 00 00:18:25.760 Multi-path I/O 00:18:25.760 May have multiple subsystem ports: No 00:18:25.760 May have multiple controllers: No 00:18:25.760 Associated with SR-IOV VF: No 00:18:25.760 Max Data Transfer Size: 131072 00:18:25.760 Max Number of Namespaces: 0 00:18:25.760 Max Number of I/O Queues: 1024 00:18:25.760 NVMe Specification Version (VS): 1.3 00:18:25.760 NVMe Specification Version (Identify): 1.3 00:18:25.760 Maximum Queue Entries: 128 00:18:25.760 Contiguous Queues Required: Yes 00:18:25.760 Arbitration Mechanisms Supported 00:18:25.760 Weighted Round Robin: Not Supported 00:18:25.760 Vendor Specific: Not Supported 00:18:25.760 Reset Timeout: 15000 ms 00:18:25.760 Doorbell Stride: 4 bytes 00:18:25.760 NVM Subsystem Reset: Not Supported 00:18:25.760 Command Sets Supported 00:18:25.760 NVM Command Set: Supported 00:18:25.760 Boot Partition: Not Supported 00:18:25.760 Memory Page Size Minimum: 4096 bytes 00:18:25.760 Memory Page Size Maximum: 4096 bytes 00:18:25.760 Persistent Memory Region: Not Supported 00:18:25.760 Optional Asynchronous Events Supported 00:18:25.760 Namespace Attribute Notices: Not Supported 00:18:25.760 Firmware Activation Notices: Not Supported 00:18:25.760 ANA Change Notices: Not Supported 00:18:25.760 PLE Aggregate Log Change Notices: Not Supported 00:18:25.760 LBA Status Info Alert Notices: Not Supported 00:18:25.760 EGE Aggregate Log Change Notices: Not Supported 00:18:25.760 Normal NVM Subsystem Shutdown event: Not Supported 00:18:25.760 Zone Descriptor Change Notices: Not Supported 00:18:25.760 Discovery Log Change Notices: Supported 00:18:25.760 Controller Attributes 00:18:25.760 128-bit Host Identifier: Not Supported 00:18:25.760 Non-Operational Permissive Mode: Not Supported 00:18:25.760 NVM Sets: Not Supported 00:18:25.760 Read Recovery Levels: Not Supported 00:18:25.760 Endurance Groups: Not Supported 00:18:25.760 Predictable Latency Mode: Not Supported 00:18:25.760 Traffic Based Keep ALive: Not Supported 00:18:25.760 Namespace Granularity: Not Supported 00:18:25.760 SQ Associations: Not Supported 00:18:25.760 UUID List: Not Supported 00:18:25.760 Multi-Domain Subsystem: Not Supported 00:18:25.760 Fixed Capacity Management: Not Supported 00:18:25.760 Variable Capacity Management: Not Supported 00:18:25.760 Delete Endurance Group: Not Supported 00:18:25.760 Delete NVM Set: Not Supported 00:18:25.760 Extended LBA Formats Supported: Not Supported 00:18:25.760 Flexible Data Placement Supported: Not Supported 00:18:25.760 00:18:25.760 Controller Memory Buffer Support 00:18:25.760 ================================ 00:18:25.760 Supported: No 00:18:25.760 00:18:25.760 Persistent Memory Region Support 00:18:25.760 ================================ 00:18:25.760 Supported: No 00:18:25.760 00:18:25.760 Admin Command Set Attributes 00:18:25.760 ============================ 00:18:25.760 Security Send/Receive: Not Supported 00:18:25.760 Format NVM: Not Supported 00:18:25.760 Firmware Activate/Download: Not Supported 00:18:25.760 Namespace Management: Not Supported 00:18:25.760 Device Self-Test: Not Supported 00:18:25.760 Directives: Not Supported 00:18:25.760 NVMe-MI: Not Supported 00:18:25.760 Virtualization Management: Not Supported 00:18:25.760 Doorbell Buffer Config: Not Supported 00:18:25.760 Get LBA Status Capability: Not Supported 00:18:25.760 Command & Feature Lockdown Capability: Not Supported 00:18:25.760 Abort Command Limit: 1 00:18:25.760 
Async Event Request Limit: 4 00:18:25.760 Number of Firmware Slots: N/A 00:18:25.760 Firmware Slot 1 Read-Only: N/A 00:18:25.760 Firmware Activation Without Reset: N/A 00:18:25.760 Multiple Update Detection Support: N/A 00:18:25.760 Firmware Update Granularity: No Information Provided 00:18:25.760 Per-Namespace SMART Log: No 00:18:25.760 Asymmetric Namespace Access Log Page: Not Supported 00:18:25.760 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:25.760 Command Effects Log Page: Not Supported 00:18:25.760 Get Log Page Extended Data: Supported 00:18:25.760 Telemetry Log Pages: Not Supported 00:18:25.760 Persistent Event Log Pages: Not Supported 00:18:25.760 Supported Log Pages Log Page: May Support 00:18:25.760 Commands Supported & Effects Log Page: Not Supported 00:18:25.760 Feature Identifiers & Effects Log Page:May Support 00:18:25.760 NVMe-MI Commands & Effects Log Page: May Support 00:18:25.760 Data Area 4 for Telemetry Log: Not Supported 00:18:25.760 Error Log Page Entries Supported: 128 00:18:25.760 Keep Alive: Not Supported 00:18:25.760 00:18:25.760 NVM Command Set Attributes 00:18:25.760 ========================== 00:18:25.760 Submission Queue Entry Size 00:18:25.760 Max: 1 00:18:25.760 Min: 1 00:18:25.760 Completion Queue Entry Size 00:18:25.760 Max: 1 00:18:25.760 Min: 1 00:18:25.760 Number of Namespaces: 0 00:18:25.760 Compare Command: Not Supported 00:18:25.760 Write Uncorrectable Command: Not Supported 00:18:25.760 Dataset Management Command: Not Supported 00:18:25.760 Write Zeroes Command: Not Supported 00:18:25.760 Set Features Save Field: Not Supported 00:18:25.760 Reservations: Not Supported 00:18:25.760 Timestamp: Not Supported 00:18:25.760 Copy: Not Supported 00:18:25.760 Volatile Write Cache: Not Present 00:18:25.760 Atomic Write Unit (Normal): 1 00:18:25.760 Atomic Write Unit (PFail): 1 00:18:25.760 Atomic Compare & Write Unit: 1 00:18:25.760 Fused Compare & Write: Supported 00:18:25.760 Scatter-Gather List 00:18:25.760 SGL Command Set: Supported 00:18:25.760 SGL Keyed: Supported 00:18:25.760 SGL Bit Bucket Descriptor: Not Supported 00:18:25.760 SGL Metadata Pointer: Not Supported 00:18:25.760 Oversized SGL: Not Supported 00:18:25.760 SGL Metadata Address: Not Supported 00:18:25.760 SGL Offset: Supported 00:18:25.760 Transport SGL Data Block: Not Supported 00:18:25.760 Replay Protected Memory Block: Not Supported 00:18:25.760 00:18:25.760 Firmware Slot Information 00:18:25.760 ========================= 00:18:25.760 Active slot: 0 00:18:25.760 00:18:25.760 00:18:25.760 Error Log 00:18:25.760 ========= 00:18:25.760 00:18:25.760 Active Namespaces 00:18:25.760 ================= 00:18:25.760 Discovery Log Page 00:18:25.760 ================== 00:18:25.760 Generation Counter: 2 00:18:25.760 Number of Records: 2 00:18:25.760 Record Format: 0 00:18:25.760 00:18:25.760 Discovery Log Entry 0 00:18:25.760 ---------------------- 00:18:25.760 Transport Type: 3 (TCP) 00:18:25.760 Address Family: 1 (IPv4) 00:18:25.760 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:25.760 Entry Flags: 00:18:25.760 Duplicate Returned Information: 1 00:18:25.760 Explicit Persistent Connection Support for Discovery: 1 00:18:25.760 Transport Requirements: 00:18:25.760 Secure Channel: Not Required 00:18:25.760 Port ID: 0 (0x0000) 00:18:25.760 Controller ID: 65535 (0xffff) 00:18:25.760 Admin Max SQ Size: 128 00:18:25.760 Transport Service Identifier: 4420 00:18:25.760 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:25.760 Transport Address: 10.0.0.3 00:18:25.760 
Discovery Log Entry 1 00:18:25.760 ---------------------- 00:18:25.760 Transport Type: 3 (TCP) 00:18:25.760 Address Family: 1 (IPv4) 00:18:25.761 Subsystem Type: 2 (NVM Subsystem) 00:18:25.761 Entry Flags: 00:18:25.761 Duplicate Returned Information: 0 00:18:25.761 Explicit Persistent Connection Support for Discovery: 0 00:18:25.761 Transport Requirements: 00:18:25.761 Secure Channel: Not Required 00:18:25.761 Port ID: 0 (0x0000) 00:18:25.761 Controller ID: 65535 (0xffff) 00:18:25.761 Admin Max SQ Size: 128 00:18:25.761 Transport Service Identifier: 4420 00:18:25.761 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:25.761 Transport Address: 10.0.0.3 [2024-12-10 10:32:00.732277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.761 [2024-12-10 10:32:00.732321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.761 [2024-12-10 10:32:00.732328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.761 [2024-12-10 10:32:00.732332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.761 [2024-12-10 10:32:00.732342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:25.761 [2024-12-10 10:32:00.732351] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732360] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.761 [2024-12-10 10:32:00.732384] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.761 [2024-12-10 10:32:00.732447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.761 [2024-12-10 10:32:00.732456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.761 [2024-12-10 10:32:00.732460] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732465] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.761 [2024-12-10 10:32:00.732470] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:25.761 [2024-12-10 10:32:00.732476] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:25.761 [2024-12-10 10:32:00.732484] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:25.761 [2024-12-10 10:32:00.732500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:25.761 [2024-12-10 10:32:00.732511] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.761 [2024-12-10 10:32:00.732546] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.761 [2024-12-10 10:32:00.732634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.761 [2024-12-10 10:32:00.732641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.761 [2024-12-10 10:32:00.732645] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732649] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcbd0): datao=0, datal=4096, cccid=0 00:18:25.761 [2024-12-10 10:32:00.732655] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa030c0) on tqpair(0x9bcbd0): expected_datao=0, payload_size=4096 00:18:25.761 [2024-12-10 10:32:00.732660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732668] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732673] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.761 [2024-12-10 10:32:00.732688] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.761 [2024-12-10 10:32:00.732691] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.761 [2024-12-10 10:32:00.732706] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:25.761 [2024-12-10 10:32:00.732712] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:25.761 [2024-12-10 10:32:00.732717] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:25.761 [2024-12-10 10:32:00.732722] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:25.761 [2024-12-10 10:32:00.732727] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:25.761 [2024-12-10 10:32:00.732733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:25.761 [2024-12-10 10:32:00.732742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:25.761 [2024-12-10 10:32:00.732749] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:25.761 [2024-12-10 10:32:00.732800] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.761 [2024-12-10 10:32:00.732875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.761 [2024-12-10 10:32:00.732882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.761 [2024-12-10 10:32:00.732885] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.761 [2024-12-10 10:32:00.732897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732901] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.761 [2024-12-10 10:32:00.732918] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732926] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.761 [2024-12-10 10:32:00.732938] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732942] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.761 [2024-12-10 10:32:00.732958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.732966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.732971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.761 [2024-12-10 10:32:00.732978] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:25.761 [2024-12-10 10:32:00.732991] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:25.761 [2024-12-10 10:32:00.732999] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.733003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.733010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.761 [2024-12-10 10:32:00.733030] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xa030c0, cid 0, qid 0 00:18:25.761 [2024-12-10 10:32:00.733037] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03240, cid 1, qid 0 00:18:25.761 [2024-12-10 10:32:00.733042] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa033c0, cid 2, qid 0 00:18:25.761 [2024-12-10 10:32:00.733047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.761 [2024-12-10 10:32:00.733051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa036c0, cid 4, qid 0 00:18:25.761 [2024-12-10 10:32:00.733138] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.761 [2024-12-10 10:32:00.733144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.761 [2024-12-10 10:32:00.733148] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.733152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa036c0) on tqpair=0x9bcbd0 00:18:25.761 [2024-12-10 10:32:00.733157] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:25.761 [2024-12-10 10:32:00.733163] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:25.761 [2024-12-10 10:32:00.733174] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.733178] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcbd0) 00:18:25.761 [2024-12-10 10:32:00.733185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.761 [2024-12-10 10:32:00.733202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa036c0, cid 4, qid 0 00:18:25.761 [2024-12-10 10:32:00.733260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.761 [2024-12-10 10:32:00.733267] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.761 [2024-12-10 10:32:00.733271] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.761 [2024-12-10 10:32:00.733275] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcbd0): datao=0, datal=4096, cccid=4 00:18:25.762 [2024-12-10 10:32:00.733280] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa036c0) on tqpair(0x9bcbd0): expected_datao=0, payload_size=4096 00:18:25.762 [2024-12-10 10:32:00.733284] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733291] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733295] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733303] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.733309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.733313] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733317] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa036c0) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.733329] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:25.762 
[2024-12-10 10:32:00.733355] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.733369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.762 [2024-12-10 10:32:00.733377] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.733391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.762 [2024-12-10 10:32:00.733429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa036c0, cid 4, qid 0 00:18:25.762 [2024-12-10 10:32:00.733436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03840, cid 5, qid 0 00:18:25.762 [2024-12-10 10:32:00.733541] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.762 [2024-12-10 10:32:00.733549] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.762 [2024-12-10 10:32:00.733553] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733557] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcbd0): datao=0, datal=1024, cccid=4 00:18:25.762 [2024-12-10 10:32:00.733562] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa036c0) on tqpair(0x9bcbd0): expected_datao=0, payload_size=1024 00:18:25.762 [2024-12-10 10:32:00.733567] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733574] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733577] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733583] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.733589] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.733593] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03840) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.733615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.733623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.733627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa036c0) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.733643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.733655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
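[Editor's note] The GET LOG PAGE (02) commands traced above (cdw10 values 00ff0070, 02ff0070, 00010070, then an 8-byte re-read) are the discovery controller serving the Discovery log page (log identifier 0x70) in chunks; the "Discovery Log Entry 1" fields printed earlier come from that payload. The following is an illustrative host-side sketch of the same read using SPDK's public API, not code from this test run: it assumes an already-connected discovery controller, a single 4 KiB read, and the helper names local to the sketch (get_log_done, dump_discovery_log) are made up for the example.

/*
 * Illustrative sketch only -- not part of this test. Assumes 'ctrlr' was
 * obtained with spdk_nvme_connect() against the discovery subsystem
 * (nqn.2014-08.org.nvmexpress.discovery) over TCP, as in the trace above.
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

/* Completion callback for the Get Log Page admin command. */
static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "Get Log Page (0x70) failed\n");
		return;
	}

	const struct spdk_nvmf_discovery_log_page *log = cb_arg;
	for (uint64_t i = 0; i < log->numrec; i++) {
		const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
		/* Same fields the tool prints above: transport type, service id,
		 * subsystem NQN and transport address. */
		printf("entry %" PRIu64 ": trtype=%u trsvcid=%.32s subnqn=%.256s traddr=%.256s\n",
		       i, e->trtype, (const char *)e->trsvcid,
		       (const char *)e->subnqn, (const char *)e->traddr);
	}
}

/* One 4 KiB read covers the 1 KiB header plus up to three 1 KiB entries;
 * the trace above shows the real tool re-reading at offsets once numrec
 * is known. nsid 0 matches the commands in the trace. */
static int
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	static uint8_t buf[4096];
	int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
						  0, buf, sizeof(buf), 0,
						  get_log_done, buf);
	if (rc != 0) {
		return rc;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}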
00:18:25.762 [2024-12-10 10:32:00.733679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa036c0, cid 4, qid 0 00:18:25.762 [2024-12-10 10:32:00.733753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.762 [2024-12-10 10:32:00.733760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.762 [2024-12-10 10:32:00.733764] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733767] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcbd0): datao=0, datal=3072, cccid=4 00:18:25.762 [2024-12-10 10:32:00.733787] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa036c0) on tqpair(0x9bcbd0): expected_datao=0, payload_size=3072 00:18:25.762 [2024-12-10 10:32:00.733792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733798] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733802] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.733823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.733827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa036c0) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.733841] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733845] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.733852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.762 [2024-12-10 10:32:00.733874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa036c0, cid 4, qid 0 00:18:25.762 [2024-12-10 10:32:00.733938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.762 [2024-12-10 10:32:00.733945] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.762 [2024-12-10 10:32:00.733948] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9bcbd0): datao=0, datal=8, cccid=4 00:18:25.762 [2024-12-10 10:32:00.733957] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa036c0) on tqpair(0x9bcbd0): expected_datao=0, payload_size=8 00:18:25.762 [2024-12-10 10:32:00.733961] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733967] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733971] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.733985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.733992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.733996] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa036c0) on tqpair=0x9bcbd0 
00:18:25.762 [2024-12-10 10:32:00.734084] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:25.762 [2024-12-10 10:32:00.734097] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa030c0) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.762 [2024-12-10 10:32:00.734110] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03240) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.762 [2024-12-10 10:32:00.734119] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa033c0) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.762 [2024-12-10 10:32:00.734129] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.762 [2024-12-10 10:32:00.734143] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.734158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.762 [2024-12-10 10:32:00.734180] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.762 [2024-12-10 10:32:00.734222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.734229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.734233] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.734260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.762 [2024-12-10 10:32:00.734281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.762 [2024-12-10 10:32:00.734336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.734342] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.734346] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734350] nvme_tcp.c:1079:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734355] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:25.762 [2024-12-10 10:32:00.734365] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:25.762 [2024-12-10 10:32:00.734376] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.734392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.762 [2024-12-10 10:32:00.734454] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.762 [2024-12-10 10:32:00.734510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.762 [2024-12-10 10:32:00.734517] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.762 [2024-12-10 10:32:00.734521] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.762 [2024-12-10 10:32:00.734537] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.762 [2024-12-10 10:32:00.734547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.762 [2024-12-10 10:32:00.734555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.734574] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.734625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.734632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.734636] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.734651] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734660] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.734667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.734685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.734735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.734742] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.734746] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.734761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734784] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.734792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.734808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.734850] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.734856] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.734860] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734864] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.734874] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734879] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734883] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.734890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.734906] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.734955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.734962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.734966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.734980] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734985] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.734989] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.734996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.735012] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.735052] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.735059] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.735063] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735067] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 
[2024-12-10 10:32:00.735077] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.735093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.735109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.735153] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.735160] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.735163] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.735177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.735193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.735209] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.735250] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.735256] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.735260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735264] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.735274] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735279] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735283] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.735290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.735306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.735351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.735358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.735361] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735365] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.735375] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.735380] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 
10:32:00.735384] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.735391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.739434] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.739460] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.739484] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.739488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.739492] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.739506] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.739511] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.739515] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9bcbd0) 00:18:25.763 [2024-12-10 10:32:00.739523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.763 [2024-12-10 10:32:00.739546] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03540, cid 3, qid 0 00:18:25.763 [2024-12-10 10:32:00.739592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.763 [2024-12-10 10:32:00.739626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.763 [2024-12-10 10:32:00.739645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.763 [2024-12-10 10:32:00.739650] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03540) on tqpair=0x9bcbd0 00:18:25.763 [2024-12-10 10:32:00.739659] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:18:25.763 00:18:25.763 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:25.763 [2024-12-10 10:32:00.779675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
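[Editor's note] The host/identify.sh step above launches the prebuilt spdk_nvme_identify example against the data subsystem using the transport ID string shown on its command line. For orientation only, the sketch below shows roughly how a host application reaches the same point with SPDK's public API: parse that transport ID string, then perform a synchronous connect, which drives the FABRIC CONNECT / PROPERTY GET / IDENTIFY sequence the *DEBUG* trace below records. This is a minimal, assumption-laden example (app name and reduced error handling are illustrative), not the test's code.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify. */
	struct spdk_nvme_transport_id trid = {};
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: performs the fabric connect and controller
	 * initialization sequence traced in the debug output below. */
	struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected to %s (CNTLID 0x%04x)\n", trid.subnqn, cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}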
00:18:25.763 [2024-12-10 10:32:00.779729] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88867 ] 00:18:25.763 [2024-12-10 10:32:00.918152] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:18:25.763 [2024-12-10 10:32:00.918216] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:25.763 [2024-12-10 10:32:00.918223] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:25.763 [2024-12-10 10:32:00.918236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:25.764 [2024-12-10 10:32:00.918245] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:25.764 [2024-12-10 10:32:00.922530] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:18:25.764 [2024-12-10 10:32:00.922601] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22cabd0 0 00:18:25.764 [2024-12-10 10:32:00.929479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:25.764 [2024-12-10 10:32:00.929506] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:25.764 [2024-12-10 10:32:00.929513] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:25.764 [2024-12-10 10:32:00.929518] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:25.764 [2024-12-10 10:32:00.929555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.929562] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.929567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.929581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:25.764 [2024-12-10 10:32:00.929615] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.937495] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.937518] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.937539] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937544] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.937558] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:25.764 [2024-12-10 10:32:00.937566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:25.764 [2024-12-10 10:32:00.937573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:25.764 [2024-12-10 10:32:00.937590] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937596] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937600] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.937610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.937639] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.937708] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.937716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.937720] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937724] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.937730] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:25.764 [2024-12-10 10:32:00.937739] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:25.764 [2024-12-10 10:32:00.937761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937770] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.937777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.937812] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.937859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.937866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.937870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.937880] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:25.764 [2024-12-10 10:32:00.937889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:25.764 [2024-12-10 10:32:00.937897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937902] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937906] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.937913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.937932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.937975] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.937982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.937986] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.937990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.937996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:25.764 [2024-12-10 10:32:00.938007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.938024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.938041] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.938087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.938094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.938098] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938103] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.938108] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:25.764 [2024-12-10 10:32:00.938113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:25.764 [2024-12-10 10:32:00.938122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:25.764 [2024-12-10 10:32:00.938227] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:25.764 [2024-12-10 10:32:00.938232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:25.764 [2024-12-10 10:32:00.938242] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938246] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938250] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.938258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.938277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.938324] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.938331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.938335] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938340] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.938345] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:25.764 [2024-12-10 10:32:00.938356] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938361] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.938372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.938390] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.764 [2024-12-10 10:32:00.938436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.764 [2024-12-10 10:32:00.938464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.764 [2024-12-10 10:32:00.938469] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938474] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.764 [2024-12-10 10:32:00.938479] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:25.764 [2024-12-10 10:32:00.938485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:25.764 [2024-12-10 10:32:00.938494] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:25.764 [2024-12-10 10:32:00.938510] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:25.764 [2024-12-10 10:32:00.938521] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.764 [2024-12-10 10:32:00.938526] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.764 [2024-12-10 10:32:00.938535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.764 [2024-12-10 10:32:00.938557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.765 [2024-12-10 10:32:00.938650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.765 [2024-12-10 10:32:00.938658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.765 [2024-12-10 10:32:00.938662] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938666] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=4096, cccid=0 00:18:25.765 [2024-12-10 10:32:00.938672] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23110c0) on tqpair(0x22cabd0): expected_datao=0, payload_size=4096 00:18:25.765 [2024-12-10 10:32:00.938677] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938685] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938690] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 
10:32:00.938700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.765 [2024-12-10 10:32:00.938706] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.765 [2024-12-10 10:32:00.938710] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938714] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.765 [2024-12-10 10:32:00.938724] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:25.765 [2024-12-10 10:32:00.938729] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:25.765 [2024-12-10 10:32:00.938734] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:25.765 [2024-12-10 10:32:00.938739] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:25.765 [2024-12-10 10:32:00.938745] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:25.765 [2024-12-10 10:32:00.938750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.938759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.938769] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.938786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:25.765 [2024-12-10 10:32:00.938807] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.765 [2024-12-10 10:32:00.938857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.765 [2024-12-10 10:32:00.938865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.765 [2024-12-10 10:32:00.938868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938873] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.765 [2024-12-10 10:32:00.938881] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938885] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.938897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.765 [2024-12-10 10:32:00.938903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938907] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938912] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22cabd0) 00:18:25.765 
[2024-12-10 10:32:00.938918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.765 [2024-12-10 10:32:00.938925] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938929] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938933] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.938939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.765 [2024-12-10 10:32:00.938946] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938950] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938954] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.938960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.765 [2024-12-10 10:32:00.938966] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.938980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.938988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.938992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.939000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.765 [2024-12-10 10:32:00.939021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23110c0, cid 0, qid 0 00:18:25.765 [2024-12-10 10:32:00.939028] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311240, cid 1, qid 0 00:18:25.765 [2024-12-10 10:32:00.939033] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23113c0, cid 2, qid 0 00:18:25.765 [2024-12-10 10:32:00.939038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.765 [2024-12-10 10:32:00.939043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.765 [2024-12-10 10:32:00.939136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.765 [2024-12-10 10:32:00.939144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.765 [2024-12-10 10:32:00.939148] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.765 [2024-12-10 10:32:00.939158] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:25.765 [2024-12-10 10:32:00.939164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939173] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939184] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939192] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939196] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939200] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.939208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:25.765 [2024-12-10 10:32:00.939228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.765 [2024-12-10 10:32:00.939276] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.765 [2024-12-10 10:32:00.939283] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.765 [2024-12-10 10:32:00.939287] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939291] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.765 [2024-12-10 10:32:00.939359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939379] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939384] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.765 [2024-12-10 10:32:00.939392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.765 [2024-12-10 10:32:00.939427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.765 [2024-12-10 10:32:00.939491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.765 [2024-12-10 10:32:00.939499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.765 [2024-12-10 10:32:00.939503] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939507] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=4096, cccid=4 00:18:25.765 [2024-12-10 10:32:00.939512] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23116c0) on tqpair(0x22cabd0): expected_datao=0, payload_size=4096 00:18:25.765 [2024-12-10 10:32:00.939517] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939524] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939529] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.765 [2024-12-10 10:32:00.939544] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:18:25.765 [2024-12-10 10:32:00.939547] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.765 [2024-12-10 10:32:00.939563] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:25.765 [2024-12-10 10:32:00.939576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:25.765 [2024-12-10 10:32:00.939606] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.765 [2024-12-10 10:32:00.939612] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.939620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.939642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.766 [2024-12-10 10:32:00.939720] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.766 [2024-12-10 10:32:00.939727] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.766 [2024-12-10 10:32:00.939731] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939735] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=4096, cccid=4 00:18:25.766 [2024-12-10 10:32:00.939740] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23116c0) on tqpair(0x22cabd0): expected_datao=0, payload_size=4096 00:18:25.766 [2024-12-10 10:32:00.939745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939753] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939757] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 [2024-12-10 10:32:00.939772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.939776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.939797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.939809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.939818] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939822] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.939830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.939851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.766 [2024-12-10 10:32:00.939909] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.766 [2024-12-10 10:32:00.939916] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.766 [2024-12-10 10:32:00.939920] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939924] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=4096, cccid=4 00:18:25.766 [2024-12-10 10:32:00.939929] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23116c0) on tqpair(0x22cabd0): expected_datao=0, payload_size=4096 00:18:25.766 [2024-12-10 10:32:00.939934] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939941] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939946] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939954] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 [2024-12-10 10:32:00.939961] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.939965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.939969] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.939978] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.939987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.939998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.940005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.940011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.940016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.940022] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:25.766 [2024-12-10 10:32:00.940027] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:25.766 [2024-12-10 10:32:00.940033] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:25.766 [2024-12-10 10:32:00.940049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940054] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940061] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:25.766 [2024-12-10 10:32:00.940106] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.766 [2024-12-10 10:32:00.940114] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311840, cid 5, qid 0 00:18:25.766 [2024-12-10 10:32:00.940176] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 [2024-12-10 10:32:00.940183] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.940187] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940191] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.940198] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 [2024-12-10 10:32:00.940205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.940209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311840) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.940224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940254] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311840, cid 5, qid 0 00:18:25.766 [2024-12-10 10:32:00.940301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 [2024-12-10 10:32:00.940308] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.940312] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940316] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311840) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.940327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940331] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940356] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311840, cid 5, qid 0 00:18:25.766 [2024-12-10 10:32:00.940432] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 
[2024-12-10 10:32:00.940442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.940445] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311840) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.940461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940493] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311840, cid 5, qid 0 00:18:25.766 [2024-12-10 10:32:00.940541] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.766 [2024-12-10 10:32:00.940548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.766 [2024-12-10 10:32:00.940552] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940556] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311840) on tqpair=0x22cabd0 00:18:25.766 [2024-12-10 10:32:00.940575] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940597] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940601] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22cabd0) 00:18:25.766 [2024-12-10 10:32:00.940627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.766 [2024-12-10 10:32:00.940638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.766 [2024-12-10 10:32:00.940642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22cabd0) 00:18:25.767 [2024-12-10 10:32:00.940649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.767 [2024-12-10 10:32:00.940670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311840, cid 5, qid 0 00:18:25.767 [2024-12-10 10:32:00.940677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23116c0, cid 4, qid 0 00:18:25.767 [2024-12-10 10:32:00.940682] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23119c0, cid 6, qid 0 00:18:25.767 [2024-12-10 10:32:00.940687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311b40, cid 7, qid 0 00:18:25.767 [2024-12-10 10:32:00.940821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.767 [2024-12-10 10:32:00.940829] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.767 [2024-12-10 10:32:00.940832] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940836] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=8192, cccid=5 00:18:25.767 [2024-12-10 10:32:00.940841] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311840) on tqpair(0x22cabd0): expected_datao=0, payload_size=8192 00:18:25.767 [2024-12-10 10:32:00.940846] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940863] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940868] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.767 [2024-12-10 10:32:00.940880] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.767 [2024-12-10 10:32:00.940884] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=512, cccid=4 00:18:25.767 [2024-12-10 10:32:00.940893] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23116c0) on tqpair(0x22cabd0): expected_datao=0, payload_size=512 00:18:25.767 [2024-12-10 10:32:00.940898] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940904] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940908] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.767 [2024-12-10 10:32:00.940920] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.767 [2024-12-10 10:32:00.940924] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940928] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22cabd0): datao=0, datal=512, cccid=6 00:18:25.767 [2024-12-10 10:32:00.940933] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23119c0) on tqpair(0x22cabd0): expected_datao=0, payload_size=512 00:18:25.767 [2024-12-10 10:32:00.940938] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940944] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940948] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940954] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:25.767 [2024-12-10 10:32:00.940960] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:25.767 [2024-12-10 10:32:00.940964] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940968] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x22cabd0): datao=0, datal=4096, cccid=7 00:18:25.767 [2024-12-10 10:32:00.940972] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2311b40) on tqpair(0x22cabd0): expected_datao=0, payload_size=4096 00:18:25.767 [2024-12-10 10:32:00.940977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940984] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940988] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.940996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.767 [2024-12-10 10:32:00.941002] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.767 [2024-12-10 10:32:00.941006] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.941010] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311840) on tqpair=0x22cabd0 00:18:25.767 ===================================================== 00:18:25.767 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:25.767 ===================================================== 00:18:25.767 Controller Capabilities/Features 00:18:25.767 ================================ 00:18:25.767 Vendor ID: 8086 00:18:25.767 Subsystem Vendor ID: 8086 00:18:25.767 Serial Number: SPDK00000000000001 00:18:25.767 Model Number: SPDK bdev Controller 00:18:25.767 Firmware Version: 24.09.1 00:18:25.767 Recommended Arb Burst: 6 00:18:25.767 IEEE OUI Identifier: e4 d2 5c 00:18:25.767 Multi-path I/O 00:18:25.767 May have multiple subsystem ports: Yes 00:18:25.767 May have multiple controllers: Yes 00:18:25.767 Associated with SR-IOV VF: No 00:18:25.767 Max Data Transfer Size: 131072 00:18:25.767 Max Number of Namespaces: 32 00:18:25.767 Max Number of I/O Queues: 127 00:18:25.767 NVMe Specification Version (VS): 1.3 00:18:25.767 NVMe Specification Version (Identify): 1.3 00:18:25.767 Maximum Queue Entries: 128 00:18:25.767 Contiguous Queues Required: Yes 00:18:25.767 Arbitration Mechanisms Supported 00:18:25.767 Weighted Round Robin: Not Supported 00:18:25.767 Vendor Specific: Not Supported 00:18:25.767 Reset Timeout: 15000 ms 00:18:25.767 Doorbell Stride: 4 bytes 00:18:25.767 NVM Subsystem Reset: Not Supported 00:18:25.767 Command Sets Supported 00:18:25.767 NVM Command Set: Supported 00:18:25.767 Boot Partition: Not Supported 00:18:25.767 Memory Page Size Minimum: 4096 bytes 00:18:25.767 Memory Page Size Maximum: 4096 bytes 00:18:25.767 Persistent Memory Region: Not Supported 00:18:25.767 Optional Asynchronous Events Supported 00:18:25.767 Namespace Attribute Notices: Supported 00:18:25.767 Firmware Activation Notices: Not Supported 00:18:25.767 ANA Change Notices: Not Supported 00:18:25.767 PLE Aggregate Log Change Notices: Not Supported 00:18:25.767 LBA Status Info Alert Notices: Not Supported 00:18:25.767 EGE Aggregate Log Change Notices: Not Supported 00:18:25.767 Normal NVM Subsystem Shutdown event: Not Supported 00:18:25.767 Zone Descriptor Change Notices: Not Supported 00:18:25.767 Discovery Log Change Notices: Not Supported 00:18:25.767 Controller Attributes 00:18:25.767 128-bit Host Identifier: Supported 00:18:25.767 Non-Operational Permissive Mode: Not Supported 00:18:25.767 NVM Sets: Not Supported 00:18:25.767 Read Recovery Levels: Not Supported 00:18:25.767 Endurance Groups: Not Supported 00:18:25.767 Predictable Latency Mode: Not Supported 00:18:25.767 Traffic Based Keep ALive: Not 
Supported 00:18:25.767 Namespace Granularity: Not Supported 00:18:25.767 SQ Associations: Not Supported 00:18:25.767 UUID List: Not Supported 00:18:25.767 Multi-Domain Subsystem: Not Supported 00:18:25.767 Fixed Capacity Management: Not Supported 00:18:25.767 Variable Capacity Management: Not Supported 00:18:25.767 Delete Endurance Group: Not Supported 00:18:25.767 Delete NVM Set: Not Supported 00:18:25.767 Extended LBA Formats Supported: Not Supported 00:18:25.767 Flexible Data Placement Supported: Not Supported 00:18:25.767 00:18:25.767 Controller Memory Buffer Support 00:18:25.767 ================================ 00:18:25.767 Supported: No 00:18:25.767 00:18:25.767 Persistent Memory Region Support 00:18:25.767 ================================ 00:18:25.767 Supported: No 00:18:25.767 00:18:25.767 Admin Command Set Attributes 00:18:25.767 ============================ 00:18:25.767 Security Send/Receive: Not Supported 00:18:25.767 Format NVM: Not Supported 00:18:25.767 Firmware Activate/Download: Not Supported 00:18:25.767 Namespace Management: Not Supported 00:18:25.767 Device Self-Test: Not Supported 00:18:25.767 Directives: Not Supported 00:18:25.767 NVMe-MI: Not Supported 00:18:25.767 Virtualization Management: Not Supported 00:18:25.767 Doorbell Buffer Config: Not Supported 00:18:25.767 Get LBA Status Capability: Not Supported 00:18:25.767 Command & Feature Lockdown Capability: Not Supported 00:18:25.767 Abort Command Limit: 4 00:18:25.767 Async Event Request Limit: 4 00:18:25.767 Number of Firmware Slots: N/A 00:18:25.767 Firmware Slot 1 Read-Only: N/A 00:18:25.767 Firmware Activation Without Reset: [2024-12-10 10:32:00.941028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.767 [2024-12-10 10:32:00.941036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.767 [2024-12-10 10:32:00.941040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.941044] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23116c0) on tqpair=0x22cabd0 00:18:25.767 [2024-12-10 10:32:00.941056] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.767 [2024-12-10 10:32:00.941063] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.767 [2024-12-10 10:32:00.941067] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.767 [2024-12-10 10:32:00.941071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23119c0) on tqpair=0x22cabd0 00:18:25.767 [2024-12-10 10:32:00.941079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.767 [2024-12-10 10:32:00.941085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.768 [2024-12-10 10:32:00.941089] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.768 [2024-12-10 10:32:00.941093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311b40) on tqpair=0x22cabd0 00:18:25.768 N/A 00:18:25.768 Multiple Update Detection Support: N/A 00:18:25.768 Firmware Update Granularity: No Information Provided 00:18:25.768 Per-Namespace SMART Log: No 00:18:25.768 Asymmetric Namespace Access Log Page: Not Supported 00:18:25.768 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:25.768 Command Effects Log Page: Supported 00:18:25.768 Get Log Page Extended Data: Supported 00:18:25.768 Telemetry Log Pages: Not Supported 00:18:25.768 Persistent Event Log Pages: Not Supported 00:18:25.768 Supported Log Pages Log Page: May Support 
00:18:25.768 Commands Supported & Effects Log Page: Not Supported 00:18:25.768 Feature Identifiers & Effects Log Page:May Support 00:18:25.768 NVMe-MI Commands & Effects Log Page: May Support 00:18:25.768 Data Area 4 for Telemetry Log: Not Supported 00:18:25.768 Error Log Page Entries Supported: 128 00:18:25.768 Keep Alive: Supported 00:18:25.768 Keep Alive Granularity: 10000 ms 00:18:25.768 00:18:25.768 NVM Command Set Attributes 00:18:25.768 ========================== 00:18:25.768 Submission Queue Entry Size 00:18:25.768 Max: 64 00:18:25.768 Min: 64 00:18:25.768 Completion Queue Entry Size 00:18:25.768 Max: 16 00:18:25.768 Min: 16 00:18:25.768 Number of Namespaces: 32 00:18:25.768 Compare Command: Supported 00:18:25.768 Write Uncorrectable Command: Not Supported 00:18:25.768 Dataset Management Command: Supported 00:18:25.768 Write Zeroes Command: Supported 00:18:25.768 Set Features Save Field: Not Supported 00:18:25.768 Reservations: Supported 00:18:25.768 Timestamp: Not Supported 00:18:25.768 Copy: Supported 00:18:25.768 Volatile Write Cache: Present 00:18:25.768 Atomic Write Unit (Normal): 1 00:18:25.768 Atomic Write Unit (PFail): 1 00:18:25.768 Atomic Compare & Write Unit: 1 00:18:25.768 Fused Compare & Write: Supported 00:18:25.768 Scatter-Gather List 00:18:25.768 SGL Command Set: Supported 00:18:25.768 SGL Keyed: Supported 00:18:25.768 SGL Bit Bucket Descriptor: Not Supported 00:18:25.768 SGL Metadata Pointer: Not Supported 00:18:25.768 Oversized SGL: Not Supported 00:18:25.768 SGL Metadata Address: Not Supported 00:18:25.768 SGL Offset: Supported 00:18:25.768 Transport SGL Data Block: Not Supported 00:18:25.768 Replay Protected Memory Block: Not Supported 00:18:25.768 00:18:25.768 Firmware Slot Information 00:18:25.768 ========================= 00:18:25.768 Active slot: 1 00:18:25.768 Slot 1 Firmware Revision: 24.09.1 00:18:25.768 00:18:25.768 00:18:25.768 Commands Supported and Effects 00:18:25.768 ============================== 00:18:25.768 Admin Commands 00:18:25.768 -------------- 00:18:25.768 Get Log Page (02h): Supported 00:18:25.768 Identify (06h): Supported 00:18:25.768 Abort (08h): Supported 00:18:25.768 Set Features (09h): Supported 00:18:25.768 Get Features (0Ah): Supported 00:18:25.768 Asynchronous Event Request (0Ch): Supported 00:18:25.768 Keep Alive (18h): Supported 00:18:25.768 I/O Commands 00:18:25.768 ------------ 00:18:25.768 Flush (00h): Supported LBA-Change 00:18:25.768 Write (01h): Supported LBA-Change 00:18:25.768 Read (02h): Supported 00:18:25.768 Compare (05h): Supported 00:18:25.768 Write Zeroes (08h): Supported LBA-Change 00:18:25.768 Dataset Management (09h): Supported LBA-Change 00:18:25.768 Copy (19h): Supported LBA-Change 00:18:25.768 00:18:25.768 Error Log 00:18:25.768 ========= 00:18:25.768 00:18:25.768 Arbitration 00:18:25.768 =========== 00:18:25.768 Arbitration Burst: 1 00:18:25.768 00:18:25.768 Power Management 00:18:25.768 ================ 00:18:25.768 Number of Power States: 1 00:18:25.768 Current Power State: Power State #0 00:18:25.768 Power State #0: 00:18:25.768 Max Power: 0.00 W 00:18:25.768 Non-Operational State: Operational 00:18:25.768 Entry Latency: Not Reported 00:18:25.768 Exit Latency: Not Reported 00:18:25.768 Relative Read Throughput: 0 00:18:25.768 Relative Read Latency: 0 00:18:25.768 Relative Write Throughput: 0 00:18:25.768 Relative Write Latency: 0 00:18:25.768 Idle Power: Not Reported 00:18:25.768 Active Power: Not Reported 00:18:25.768 Non-Operational Permissive Mode: Not Supported 00:18:25.768 00:18:25.768 Health 
Information 00:18:25.768 ================== 00:18:25.768 Critical Warnings: 00:18:25.768 Available Spare Space: OK 00:18:25.768 Temperature: OK 00:18:25.768 Device Reliability: OK 00:18:25.768 Read Only: No 00:18:25.768 Volatile Memory Backup: OK 00:18:25.768 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:25.768 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:25.768 Available Spare: 0% 00:18:25.768 Available Spare Threshold: 0% 00:18:25.768 Life Percentage U[2024-12-10 10:32:00.941199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.768 [2024-12-10 10:32:00.941206] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22cabd0) 00:18:25.768 [2024-12-10 10:32:00.941214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.768 [2024-12-10 10:32:00.941237] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311b40, cid 7, qid 0 00:18:25.768 [2024-12-10 10:32:00.941286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.768 [2024-12-10 10:32:00.941293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.768 [2024-12-10 10:32:00.941297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.768 [2024-12-10 10:32:00.941301] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311b40) on tqpair=0x22cabd0 00:18:25.768 [2024-12-10 10:32:00.941340] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:25.768 [2024-12-10 10:32:00.941351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23110c0) on tqpair=0x22cabd0 00:18:25.768 [2024-12-10 10:32:00.941359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.768 [2024-12-10 10:32:00.941365] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311240) on tqpair=0x22cabd0 00:18:25.768 [2024-12-10 10:32:00.941371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.768 [2024-12-10 10:32:00.941376] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23113c0) on tqpair=0x22cabd0 00:18:25.768 [2024-12-10 10:32:00.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.768 [2024-12-10 10:32:00.941387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.768 [2024-12-10 10:32:00.941392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:25.768 [2024-12-10 10:32:00.945487] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.768 [2024-12-10 10:32:00.945496] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.768 [2024-12-10 10:32:00.945501] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.768 [2024-12-10 10:32:00.945510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.768 [2024-12-10 10:32:00.945539] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.768 [2024-12-10 
10:32:00.945592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.768 [2024-12-10 10:32:00.945600] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.768 [2024-12-10 10:32:00.945604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.768 [2024-12-10 10:32:00.945608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.768 [2024-12-10 10:32:00.945617] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.945633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.945656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.945727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.945734] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.945738] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.945747] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:25.769 [2024-12-10 10:32:00.945753] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:25.769 [2024-12-10 10:32:00.945763] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945772] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.945780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.945799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.945846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.945853] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.945857] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945861] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.945873] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945878] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.945890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.945908] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.945955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.945962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.945965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.945980] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.945990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.945998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946059] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946085] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946174] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946223] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946273] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 
10:32:00.946280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946284] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946299] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946304] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946308] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946334] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946385] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946389] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946411] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946427] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946437] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946465] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946516] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946524] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946543] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946621] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946628] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946632] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 
[2024-12-10 10:32:00.946636] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946647] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946652] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946732] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946741] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946751] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946756] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946760] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946841] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946844] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946849] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.769 [2024-12-10 10:32:00.946859] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946864] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.769 [2024-12-10 10:32:00.946868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.769 [2024-12-10 10:32:00.946876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.769 [2024-12-10 10:32:00.946894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.769 [2024-12-10 10:32:00.946941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.769 [2024-12-10 10:32:00.946948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.769 [2024-12-10 10:32:00.946952] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.946957] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.946967] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.946972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.946976] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.946984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947052] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947071] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947106] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947148] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947159] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947163] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947183] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947252] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947259] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947283] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947287] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947357] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947383] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947439] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947499] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947516] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947546] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947589] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947637] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947664] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947816] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947832] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947851] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.947920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.947928] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.947931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.947947] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.947956] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.947963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.947981] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 
10:32:00.948032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.948039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.948042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.948057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.948074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.948092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.948142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.948149] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.948153] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948157] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.770 [2024-12-10 10:32:00.948168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948173] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948177] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.770 [2024-12-10 10:32:00.948185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.770 [2024-12-10 10:32:00.948202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.770 [2024-12-10 10:32:00.948246] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.770 [2024-12-10 10:32:00.948253] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.770 [2024-12-10 10:32:00.948257] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.770 [2024-12-10 10:32:00.948261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948272] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.948356] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.948364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 
[2024-12-10 10:32:00.948368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948383] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948434] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.948485] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.948492] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.948495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948516] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948546] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.948590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.948597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.948601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948616] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948650] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.948695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.948702] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.948705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948710] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948721] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.948800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.948807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.948811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948816] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948826] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948831] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948835] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948861] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.948920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.948927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.948931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.948946] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948951] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.948955] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.948962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.948980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.949023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.949031] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.949036] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.949051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949056] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.949068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.949086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.949127] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.949142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.949147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.949163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.949180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.949199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.949241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.949248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.949252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.949267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949272] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 [2024-12-10 10:32:00.949284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.949302] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.949346] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.949361] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.949366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.949382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.949391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22cabd0) 00:18:25.771 
[2024-12-10 10:32:00.953452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.771 [2024-12-10 10:32:00.953483] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2311540, cid 3, qid 0 00:18:25.771 [2024-12-10 10:32:00.953548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:25.771 [2024-12-10 10:32:00.953556] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:25.771 [2024-12-10 10:32:00.953560] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:25.771 [2024-12-10 10:32:00.953565] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2311540) on tqpair=0x22cabd0 00:18:25.771 [2024-12-10 10:32:00.953574] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:18:25.771 sed: 0% 00:18:25.771 Data Units Read: 0 00:18:25.771 Data Units Written: 0 00:18:25.771 Host Read Commands: 0 00:18:25.771 Host Write Commands: 0 00:18:25.771 Controller Busy Time: 0 minutes 00:18:25.771 Power Cycles: 0 00:18:25.771 Power On Hours: 0 hours 00:18:25.771 Unsafe Shutdowns: 0 00:18:25.771 Unrecoverable Media Errors: 0 00:18:25.771 Lifetime Error Log Entries: 0 00:18:25.771 Warning Temperature Time: 0 minutes 00:18:25.771 Critical Temperature Time: 0 minutes 00:18:25.771 00:18:25.771 Number of Queues 00:18:25.771 ================ 00:18:25.771 Number of I/O Submission Queues: 127 00:18:25.771 Number of I/O Completion Queues: 127 00:18:25.771 00:18:25.771 Active Namespaces 00:18:25.771 ================= 00:18:25.771 Namespace ID:1 00:18:25.772 Error Recovery Timeout: Unlimited 00:18:25.772 Command Set Identifier: NVM (00h) 00:18:25.772 Deallocate: Supported 00:18:25.772 Deallocated/Unwritten Error: Not Supported 00:18:25.772 Deallocated Read Value: Unknown 00:18:25.772 Deallocate in Write Zeroes: Not Supported 00:18:25.772 Deallocated Guard Field: 0xFFFF 00:18:25.772 Flush: Supported 00:18:25.772 Reservation: Supported 00:18:25.772 Namespace Sharing Capabilities: Multiple Controllers 00:18:25.772 Size (in LBAs): 131072 (0GiB) 00:18:25.772 Capacity (in LBAs): 131072 (0GiB) 00:18:25.772 Utilization (in LBAs): 131072 (0GiB) 00:18:25.772 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:25.772 EUI64: ABCDEF0123456789 00:18:25.772 UUID: 98423e9e-061b-4552-ae20-1a2998a7afcd 00:18:25.772 Thin Provisioning: Not Supported 00:18:25.772 Per-NS Atomic Units: Yes 00:18:25.772 Atomic Boundary Size (Normal): 0 00:18:25.772 Atomic Boundary Size (PFail): 0 00:18:25.772 Atomic Boundary Offset: 0 00:18:25.772 Maximum Single Source Range Length: 65535 00:18:25.772 Maximum Copy Length: 65535 00:18:25.772 Maximum Source Range Count: 1 00:18:25.772 NGUID/EUI64 Never Reused: No 00:18:25.772 Namespace Write Protected: No 00:18:25.772 Number of LBA Formats: 1 00:18:25.772 Current LBA Format: LBA Format #00 00:18:25.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:25.772 00:18:25.772 10:32:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:26.031 rmmod nvme_tcp 00:18:26.031 rmmod nvme_fabrics 00:18:26.031 rmmod nvme_keyring 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 88830 ']' 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 88830 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 88830 ']' 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 88830 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88830 00:18:26.031 killing process with pid 88830 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88830' 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 88830 00:18:26.031 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 88830 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip 
link set nvmf_init_br nomaster 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.291 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:26.550 00:18:26.550 real 0m2.816s 00:18:26.550 user 0m7.034s 00:18:26.550 sys 0m0.740s 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.550 ************************************ 00:18:26.550 END TEST nvmf_identify 00:18:26.550 ************************************ 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.550 ************************************ 00:18:26.550 START TEST nvmf_perf 00:18:26.550 ************************************ 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:26.550 * Looking for test storage... 
00:18:26.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.550 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.809 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:26.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.810 --rc genhtml_branch_coverage=1 00:18:26.810 --rc genhtml_function_coverage=1 00:18:26.810 --rc genhtml_legend=1 00:18:26.810 --rc geninfo_all_blocks=1 00:18:26.810 --rc geninfo_unexecuted_blocks=1 00:18:26.810 00:18:26.810 ' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.810 --rc genhtml_branch_coverage=1 00:18:26.810 --rc genhtml_function_coverage=1 00:18:26.810 --rc genhtml_legend=1 00:18:26.810 --rc geninfo_all_blocks=1 00:18:26.810 --rc geninfo_unexecuted_blocks=1 00:18:26.810 00:18:26.810 ' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.810 --rc genhtml_branch_coverage=1 00:18:26.810 --rc genhtml_function_coverage=1 00:18:26.810 --rc genhtml_legend=1 00:18:26.810 --rc geninfo_all_blocks=1 00:18:26.810 --rc geninfo_unexecuted_blocks=1 00:18:26.810 00:18:26.810 ' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.810 --rc genhtml_branch_coverage=1 00:18:26.810 --rc genhtml_function_coverage=1 00:18:26.810 --rc genhtml_legend=1 00:18:26.810 --rc geninfo_all_blocks=1 00:18:26.810 --rc geninfo_unexecuted_blocks=1 00:18:26.810 00:18:26.810 ' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.810 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:26.810 Cannot find device "nvmf_init_br" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:26.810 Cannot find device "nvmf_init_br2" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:26.810 Cannot find device "nvmf_tgt_br" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.810 Cannot find device "nvmf_tgt_br2" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:26.810 Cannot find device "nvmf_init_br" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:26.810 Cannot find device "nvmf_init_br2" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:26.810 Cannot find device "nvmf_tgt_br" 00:18:26.810 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:26.811 Cannot find device "nvmf_tgt_br2" 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:26.811 Cannot find device "nvmf_br" 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:26.811 Cannot find device "nvmf_init_if" 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:26.811 Cannot find device "nvmf_init_if2" 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:26.811 10:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.811 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:27.069 10:32:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:27.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:27.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:27.069 00:18:27.069 --- 10.0.0.3 ping statistics --- 00:18:27.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.069 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:27.069 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:27.069 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:27.069 00:18:27.069 --- 10.0.0.4 ping statistics --- 00:18:27.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.069 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:27.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:27.069 00:18:27.069 --- 10.0.0.1 ping statistics --- 00:18:27.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.069 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:27.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:27.069 00:18:27.069 --- 10.0.0.2 ping statistics --- 00:18:27.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.069 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:27.069 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=89096 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 89096 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 89096 ']' 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
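The nvmfappstart step traced above amounts to launching the SPDK target inside the test namespace and then polling its RPC socket until it answers. A minimal standalone sketch, assuming the paths, core mask, and socket shown in the trace (the suite's own nvmfappstart/waitforlisten helpers do additional bookkeeping such as PID files and timeouts, so this is illustrative only):

    # launch nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with the traced flags
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!
    # poll the default RPC socket until the target is ready to accept commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"
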
00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.070 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:27.328 [2024-12-10 10:32:02.339234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:27.328 [2024-12-10 10:32:02.339572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.328 [2024-12-10 10:32:02.481293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.328 [2024-12-10 10:32:02.514677] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.328 [2024-12-10 10:32:02.514741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.328 [2024-12-10 10:32:02.514751] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.328 [2024-12-10 10:32:02.514759] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.328 [2024-12-10 10:32:02.514765] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.328 [2024-12-10 10:32:02.516048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.328 [2024-12-10 10:32:02.516189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.328 [2024-12-10 10:32:02.516321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.328 [2024-12-10 10:32:02.516325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.328 [2024-12-10 10:32:02.545334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:27.587 10:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:27.845 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:27.845 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:28.411 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.668 [2024-12-10 10:32:03.845832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.668 10:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.926 10:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:28.926 10:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.184 10:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:29.184 10:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:29.751 10:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:29.751 [2024-12-10 10:32:04.915121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:29.751 10:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:30.011 10:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:30.011 10:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:30.011 10:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:30.011 10:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:31.461 Initializing NVMe Controllers 00:18:31.461 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:31.461 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:31.461 Initialization complete. Launching workers. 00:18:31.461 ======================================================== 00:18:31.461 Latency(us) 00:18:31.461 Device Information : IOPS MiB/s Average min max 00:18:31.461 PCIE (0000:00:10.0) NSID 1 from core 0: 22240.00 86.88 1438.73 375.78 8173.70 00:18:31.461 ======================================================== 00:18:31.461 Total : 22240.00 86.88 1438.73 375.78 8173.70 00:18:31.461 00:18:31.461 10:32:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.396 Initializing NVMe Controllers 00:18:32.396 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.396 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:32.396 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:32.396 Initialization complete. Launching workers. 
00:18:32.396 ======================================================== 00:18:32.396 Latency(us) 00:18:32.396 Device Information : IOPS MiB/s Average min max 00:18:32.396 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3851.00 15.04 259.28 94.58 7165.04 00:18:32.396 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8046.29 7015.36 11993.07 00:18:32.396 ======================================================== 00:18:32.396 Total : 3976.00 15.53 504.09 94.58 11993.07 00:18:32.396 00:18:32.655 10:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:34.032 Initializing NVMe Controllers 00:18:34.032 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.032 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:34.032 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:34.032 Initialization complete. Launching workers. 00:18:34.032 ======================================================== 00:18:34.032 Latency(us) 00:18:34.032 Device Information : IOPS MiB/s Average min max 00:18:34.032 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9124.05 35.64 3507.35 477.47 7691.02 00:18:34.032 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4002.63 15.64 8007.02 6312.62 11678.93 00:18:34.032 ======================================================== 00:18:34.032 Total : 13126.68 51.28 4879.41 477.47 11678.93 00:18:34.032 00:18:34.032 10:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:34.032 10:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.564 Initializing NVMe Controllers 00:18:36.564 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.564 Controller IO queue size 128, less than required. 00:18:36.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:36.564 Controller IO queue size 128, less than required. 00:18:36.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:36.564 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:36.564 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:36.564 Initialization complete. Launching workers. 
00:18:36.564 ======================================================== 00:18:36.564 Latency(us) 00:18:36.564 Device Information : IOPS MiB/s Average min max 00:18:36.564 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1857.08 464.27 69546.64 36814.63 111578.65 00:18:36.564 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 676.07 169.02 196599.54 53439.84 326088.85 00:18:36.564 ======================================================== 00:18:36.564 Total : 2533.16 633.29 103455.77 36814.63 326088.85 00:18:36.564 00:18:36.564 10:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:36.564 Initializing NVMe Controllers 00:18:36.564 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.564 Controller IO queue size 128, less than required. 00:18:36.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:36.564 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:36.564 Controller IO queue size 128, less than required. 00:18:36.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:36.564 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:36.564 WARNING: Some requested NVMe devices were skipped 00:18:36.564 No valid NVMe controllers or AIO or URING devices found 00:18:36.564 10:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:39.096 Initializing NVMe Controllers 00:18:39.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.096 Controller IO queue size 128, less than required. 00:18:39.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:39.096 Controller IO queue size 128, less than required. 00:18:39.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:39.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:39.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:39.096 Initialization complete. Launching workers. 
00:18:39.096 00:18:39.096 ==================== 00:18:39.096 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:39.096 TCP transport: 00:18:39.096 polls: 9478 00:18:39.096 idle_polls: 5601 00:18:39.096 sock_completions: 3877 00:18:39.096 nvme_completions: 6685 00:18:39.096 submitted_requests: 10028 00:18:39.096 queued_requests: 1 00:18:39.096 00:18:39.096 ==================== 00:18:39.096 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:39.096 TCP transport: 00:18:39.096 polls: 9866 00:18:39.096 idle_polls: 5537 00:18:39.096 sock_completions: 4329 00:18:39.096 nvme_completions: 6665 00:18:39.096 submitted_requests: 9998 00:18:39.096 queued_requests: 1 00:18:39.096 ======================================================== 00:18:39.096 Latency(us) 00:18:39.096 Device Information : IOPS MiB/s Average min max 00:18:39.096 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1669.98 417.49 78287.99 42834.25 127498.99 00:18:39.096 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1664.98 416.24 77368.17 33551.78 134037.40 00:18:39.096 ======================================================== 00:18:39.096 Total : 3334.95 833.74 77828.77 33551.78 134037.40 00:18:39.096 00:18:39.096 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:39.096 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.664 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:39.664 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:39.664 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4cb754a8-86b1-4a29-a379-b44fd2bd075d 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4cb754a8-86b1-4a29-a379-b44fd2bd075d 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=4cb754a8-86b1-4a29-a379-b44fd2bd075d 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:39.923 10:32:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:40.181 { 00:18:40.181 "uuid": "4cb754a8-86b1-4a29-a379-b44fd2bd075d", 00:18:40.181 "name": "lvs_0", 00:18:40.181 "base_bdev": "Nvme0n1", 00:18:40.181 "total_data_clusters": 1278, 00:18:40.181 "free_clusters": 1278, 00:18:40.181 "block_size": 4096, 00:18:40.181 "cluster_size": 4194304 00:18:40.181 } 00:18:40.181 ]' 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4cb754a8-86b1-4a29-a379-b44fd2bd075d") .free_clusters' 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="4cb754a8-86b1-4a29-a379-b44fd2bd075d") .cluster_size' 00:18:40.181 5112 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:40.181 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4cb754a8-86b1-4a29-a379-b44fd2bd075d lbd_0 5112 00:18:40.439 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c443eabe-6ff4-458e-b331-eca73944ceb1 00:18:40.439 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore c443eabe-6ff4-458e-b331-eca73944ceb1 lvs_n_0 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=565f110f-b085-4c1b-924d-abfa624ae4a7 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 565f110f-b085-4c1b-924d-abfa624ae4a7 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=565f110f-b085-4c1b-924d-abfa624ae4a7 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:41.007 10:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:41.267 { 00:18:41.267 "uuid": "4cb754a8-86b1-4a29-a379-b44fd2bd075d", 00:18:41.267 "name": "lvs_0", 00:18:41.267 "base_bdev": "Nvme0n1", 00:18:41.267 "total_data_clusters": 1278, 00:18:41.267 "free_clusters": 0, 00:18:41.267 "block_size": 4096, 00:18:41.267 "cluster_size": 4194304 00:18:41.267 }, 00:18:41.267 { 00:18:41.267 "uuid": "565f110f-b085-4c1b-924d-abfa624ae4a7", 00:18:41.267 "name": "lvs_n_0", 00:18:41.267 "base_bdev": "c443eabe-6ff4-458e-b331-eca73944ceb1", 00:18:41.267 "total_data_clusters": 1276, 00:18:41.267 "free_clusters": 1276, 00:18:41.267 "block_size": 4096, 00:18:41.267 "cluster_size": 4194304 00:18:41.267 } 00:18:41.267 ]' 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="565f110f-b085-4c1b-924d-abfa624ae4a7") .free_clusters' 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="565f110f-b085-4c1b-924d-abfa624ae4a7") .cluster_size' 00:18:41.267 5104 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:41.267 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 565f110f-b085-4c1b-924d-abfa624ae4a7 lbd_nest_0 5104 00:18:41.526 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=6541b2e9-bf72-40fa-a3ff-a5417387b0ba 00:18:41.526 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.785 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:41.785 10:32:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6541b2e9-bf72-40fa-a3ff-a5417387b0ba 00:18:42.044 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.303 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:42.303 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:42.303 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:42.303 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:42.303 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.562 Initializing NVMe Controllers 00:18:42.562 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.562 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:42.562 WARNING: Some requested NVMe devices were skipped 00:18:42.562 No valid NVMe controllers or AIO or URING devices found 00:18:42.820 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:42.820 10:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:52.800 Initializing NVMe Controllers 00:18:52.800 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:52.800 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:52.800 Initialization complete. Launching workers. 
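For reference, the lvol sizes chosen above fall straight out of the lvstore metadata: get_lvs_free_mb multiplies free_clusters by cluster_size and converts to MiB, giving 1278 * 4 MiB = 5112 MiB for lvs_0 and 1276 * 4 MiB = 5104 MiB for the nested lvs_n_0, which is why lbd_0 and lbd_nest_0 were created with exactly those sizes. A standalone sketch of the same computation (the UUID is the lvs_n_0 one from this log):

  # Sketch: recompute an lvstore's free space in MiB the way get_lvs_free_mb does.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=565f110f-b085-4c1b-924d-abfa624ae4a7
  fc=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
  cs=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
  echo $(( fc * cs / 1024 / 1024 ))   # 1276 * 4194304 / 1048576 = 5104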
00:18:52.800 ======================================================== 00:18:52.800 Latency(us) 00:18:52.800 Device Information : IOPS MiB/s Average min max 00:18:52.800 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 967.10 120.89 1031.96 330.81 8464.47 00:18:52.800 ======================================================== 00:18:52.800 Total : 967.10 120.89 1031.96 330.81 8464.47 00:18:52.800 00:18:53.059 10:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:53.059 10:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:53.059 10:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:53.317 Initializing NVMe Controllers 00:18:53.317 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:53.317 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:53.317 WARNING: Some requested NVMe devices were skipped 00:18:53.317 No valid NVMe controllers or AIO or URING devices found 00:18:53.317 10:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:53.317 10:32:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:05.569 Initializing NVMe Controllers 00:19:05.569 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:05.569 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:05.569 Initialization complete. Launching workers. 
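These perf passes walk the qd_depth=(1 32 128) x io_size=(512 131072) matrix defined earlier. Every 512-byte pass is skipped with the "invalid ns size 5351931904 / block size 4096 for I/O size 512" warning: the exported lvol namespace reports 4096-byte blocks (and 5104 MiB is exactly 5,351,931,904 bytes), so a 512-byte I/O is smaller than one block and that namespace is removed from the test. Only the 131072-byte passes produce latency tables. The loop shape, reconstructed from the xtrace above rather than copied verbatim from host/perf.sh:

  # Reconstruction of the sweep driven by host/perf.sh@97-99.
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      "$perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$trid"
    done
  done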
00:19:05.569 ======================================================== 00:19:05.569 Latency(us) 00:19:05.569 Device Information : IOPS MiB/s Average min max 00:19:05.569 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1346.72 168.34 23763.82 5446.21 61476.53 00:19:05.569 ======================================================== 00:19:05.569 Total : 1346.72 168.34 23763.82 5446.21 61476.53 00:19:05.569 00:19:05.569 10:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:05.569 10:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:05.570 10:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:05.570 Initializing NVMe Controllers 00:19:05.570 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:05.570 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:05.570 WARNING: Some requested NVMe devices were skipped 00:19:05.570 No valid NVMe controllers or AIO or URING devices found 00:19:05.570 10:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:05.570 10:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:15.549 Initializing NVMe Controllers 00:19:15.549 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:15.549 Controller IO queue size 128, less than required. 00:19:15.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:15.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:15.549 Initialization complete. Launching workers. 
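The completed 131072-byte passes track the usual queue-depth relationship, IOPS ~= QD / average latency: at QD 1 the earlier table gives 1 / 1031.96 us ~= 969 IOPS against the measured 967.10, and at QD 32 the table above gives 32 / 23763.82 us ~= 1347 IOPS against the measured 1346.72. Deeper queues buy throughput at the cost of per-command latency, which is the trade-off the QD 128 pass that follows probes next.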
00:19:15.549 ======================================================== 00:19:15.549 Latency(us) 00:19:15.549 Device Information : IOPS MiB/s Average min max 00:19:15.549 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4208.06 526.01 30430.56 10714.60 60605.91 00:19:15.549 ======================================================== 00:19:15.549 Total : 4208.06 526.01 30430.56 10714.60 60605.91 00:19:15.549 00:19:15.549 10:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.549 10:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6541b2e9-bf72-40fa-a3ff-a5417387b0ba 00:19:15.549 10:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c443eabe-6ff4-458e-b331-eca73944ceb1 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.549 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.549 rmmod nvme_tcp 00:19:15.549 rmmod nvme_fabrics 00:19:15.550 rmmod nvme_keyring 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 89096 ']' 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 89096 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 89096 ']' 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 89096 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89096 00:19:15.550 killing process with pid 89096 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89096' 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 89096 00:19:15.550 10:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 89096 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:17.453 00:19:17.453 real 0m50.867s 00:19:17.453 user 3m12.058s 00:19:17.453 sys 0m11.785s 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:17.453 ************************************ 00:19:17.453 END TEST nvmf_perf 00:19:17.453 ************************************ 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:17.453 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.454 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.454 ************************************ 00:19:17.454 START TEST nvmf_fio_host 00:19:17.454 ************************************ 00:19:17.454 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:17.454 * Looking for test storage... 00:19:17.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.454 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:17.454 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:19:17.454 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.714 --rc genhtml_branch_coverage=1 00:19:17.714 --rc genhtml_function_coverage=1 00:19:17.714 --rc genhtml_legend=1 00:19:17.714 --rc geninfo_all_blocks=1 00:19:17.714 --rc geninfo_unexecuted_blocks=1 00:19:17.714 00:19:17.714 ' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.714 --rc genhtml_branch_coverage=1 00:19:17.714 --rc genhtml_function_coverage=1 00:19:17.714 --rc genhtml_legend=1 00:19:17.714 --rc geninfo_all_blocks=1 00:19:17.714 --rc geninfo_unexecuted_blocks=1 00:19:17.714 00:19:17.714 ' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.714 --rc genhtml_branch_coverage=1 00:19:17.714 --rc genhtml_function_coverage=1 00:19:17.714 --rc genhtml_legend=1 00:19:17.714 --rc geninfo_all_blocks=1 00:19:17.714 --rc geninfo_unexecuted_blocks=1 00:19:17.714 00:19:17.714 ' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.714 --rc genhtml_branch_coverage=1 00:19:17.714 --rc genhtml_function_coverage=1 00:19:17.714 --rc genhtml_legend=1 00:19:17.714 --rc geninfo_all_blocks=1 00:19:17.714 --rc geninfo_unexecuted_blocks=1 00:19:17.714 00:19:17.714 ' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.714 10:32:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.714 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.715 10:32:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.715 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:17.715 Cannot find device "nvmf_init_br" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:17.715 Cannot find device "nvmf_init_br2" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:17.715 Cannot find device "nvmf_tgt_br" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:17.715 Cannot find device "nvmf_tgt_br2" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:17.715 Cannot find device "nvmf_init_br" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:17.715 Cannot find device "nvmf_init_br2" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:17.715 Cannot find device "nvmf_tgt_br" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:17.715 Cannot find device "nvmf_tgt_br2" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:17.715 Cannot find device "nvmf_br" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:17.715 Cannot find device "nvmf_init_if" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:17.715 Cannot find device "nvmf_init_if2" 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.715 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.975 10:32:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:17.975 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:17.975 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:17.975 00:19:17.975 --- 10.0.0.3 ping statistics --- 00:19:17.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.975 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:17.975 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:17.975 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:19:17.975 00:19:17.975 --- 10.0.0.4 ping statistics --- 00:19:17.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.975 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:19:17.975 00:19:17.975 --- 10.0.0.1 ping statistics --- 00:19:17.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.975 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:17.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:19:17.975 00:19:17.975 --- 10.0.0.2 ping statistics --- 00:19:17.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.975 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89959 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89959 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 89959 ']' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.975 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.975 [2024-12-10 10:32:53.187914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:17.975 [2024-12-10 10:32:53.187998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.235 [2024-12-10 10:32:53.329380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.235 [2024-12-10 10:32:53.371111] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.235 [2024-12-10 10:32:53.371173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.235 [2024-12-10 10:32:53.371187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.235 [2024-12-10 10:32:53.371197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.235 [2024-12-10 10:32:53.371207] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
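For orientation: the nvmf_tgt process that just started via "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF" runs inside the network namespace that nvmf_veth_init assembled above, with two initiator-side veth interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces moved into nvmf_tgt_ns_spdk (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), all joined through the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420. Condensed to one initiator/target pair (paraphrased from the commands above, not a verbatim excerpt of nvmf/common.sh):

  # Condensed sketch of the veth/bridge topology built by nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # The second pair (nvmf_init_if2 at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.4) is wired up the same way.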
00:19:18.235 [2024-12-10 10:32:53.371354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.235 [2024-12-10 10:32:53.371477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.235 [2024-12-10 10:32:53.372071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.235 [2024-12-10 10:32:53.372108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.235 [2024-12-10 10:32:53.406423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.494 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.494 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:19:18.494 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.753 [2024-12-10 10:32:53.753637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.753 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:18.753 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.753 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.753 10:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:19.012 Malloc1 00:19:19.012 10:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.270 10:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:19.529 10:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:19.788 [2024-12-10 10:32:54.847636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.788 10:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:20.053 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:20.054 10:32:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:20.054 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:20.054 fio-3.35 00:19:20.054 Starting 1 thread 00:19:22.589 00:19:22.589 test: (groupid=0, jobs=1): err= 0: pid=90029: Tue Dec 10 10:32:57 2024 00:19:22.589 read: IOPS=9570, BW=37.4MiB/s (39.2MB/s)(75.0MiB/2006msec) 00:19:22.589 slat (nsec): min=1822, max=315140, avg=2298.12, stdev=3091.05 00:19:22.589 clat (usec): min=2399, max=12542, avg=6970.92, stdev=537.57 00:19:22.589 lat (usec): min=2445, max=12544, avg=6973.22, stdev=537.40 00:19:22.589 clat percentiles (usec): 00:19:22.589 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:19:22.589 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:19:22.589 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7832], 00:19:22.589 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11731], 00:19:22.589 | 99.99th=[12518] 00:19:22.589 bw ( KiB/s): min=37312, max=38968, per=99.92%, avg=38252.00, stdev=718.41, samples=4 00:19:22.589 iops : min= 9328, max= 9742, avg=9563.00, stdev=179.60, samples=4 00:19:22.589 write: IOPS=9575, BW=37.4MiB/s (39.2MB/s)(75.0MiB/2006msec); 0 zone resets 00:19:22.589 slat (nsec): min=1910, max=244659, avg=2378.28, stdev=2325.86 00:19:22.589 clat (usec): min=2269, max=11880, avg=6358.21, stdev=491.32 00:19:22.589 lat (usec): min=2281, max=11882, avg=6360.59, stdev=491.28 00:19:22.589 
clat percentiles (usec): 00:19:22.589 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 5997], 00:19:22.589 | 30.00th=[ 6128], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6390], 00:19:22.589 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7177], 00:19:22.589 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9896], 99.95th=[10814], 00:19:22.589 | 99.99th=[11731] 00:19:22.589 bw ( KiB/s): min=37960, max=38528, per=100.00%, avg=38306.00, stdev=244.09, samples=4 00:19:22.589 iops : min= 9490, max= 9632, avg=9576.50, stdev=61.02, samples=4 00:19:22.589 lat (msec) : 4=0.10%, 10=99.80%, 20=0.10% 00:19:22.589 cpu : usr=71.27%, sys=21.75%, ctx=7, majf=0, minf=8 00:19:22.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:22.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:22.589 issued rwts: total=19198,19208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:22.589 00:19:22.589 Run status group 0 (all jobs): 00:19:22.589 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=75.0MiB (78.6MB), run=2006-2006msec 00:19:22.589 WRITE: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=75.0MiB (78.7MB), run=2006-2006msec 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.589 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:22.590 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:22.590 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:22.590 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:22.590 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:22.590 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:22.590 10:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:22.590 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:22.590 fio-3.35 00:19:22.590 Starting 1 thread 00:19:25.125 00:19:25.125 test: (groupid=0, jobs=1): err= 0: pid=90072: Tue Dec 10 10:33:00 2024 00:19:25.125 read: IOPS=8836, BW=138MiB/s (145MB/s)(277MiB/2007msec) 00:19:25.125 slat (usec): min=2, max=157, avg= 3.69, stdev= 2.52 00:19:25.125 clat (usec): min=1356, max=17540, avg=8148.78, stdev=2527.49 00:19:25.125 lat (usec): min=1359, max=17544, avg=8152.47, stdev=2527.68 00:19:25.125 clat percentiles (usec): 00:19:25.125 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5866], 00:19:25.125 | 30.00th=[ 6521], 40.00th=[ 7242], 50.00th=[ 7898], 60.00th=[ 8586], 00:19:25.125 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11731], 95.00th=[13042], 00:19:25.125 | 99.00th=[14615], 99.50th=[15008], 99.90th=[16712], 99.95th=[17171], 00:19:25.125 | 99.99th=[17433] 00:19:25.125 bw ( KiB/s): min=65504, max=73216, per=49.66%, avg=70208.00, stdev=3301.54, samples=4 00:19:25.125 iops : min= 4094, max= 4576, avg=4388.00, stdev=206.35, samples=4 00:19:25.125 write: IOPS=5070, BW=79.2MiB/s (83.1MB/s)(143MiB/1799msec); 0 zone resets 00:19:25.125 slat (usec): min=31, max=292, avg=36.91, stdev= 9.59 00:19:25.125 clat (usec): min=3351, max=18543, avg=11437.02, stdev=2127.43 00:19:25.125 lat (usec): min=3397, max=18590, avg=11473.93, stdev=2128.77 00:19:25.125 clat percentiles (usec): 00:19:25.125 | 1.00th=[ 6849], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9634], 00:19:25.125 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:19:25.125 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14353], 95.00th=[15139], 00:19:25.125 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:19:25.125 | 99.99th=[18482] 00:19:25.125 bw ( KiB/s): min=68736, max=75968, per=89.95%, avg=72976.00, stdev=3061.04, samples=4 00:19:25.125 iops : min= 4296, max= 4748, avg=4561.00, stdev=191.31, samples=4 00:19:25.125 lat (msec) : 2=0.03%, 4=0.78%, 10=59.98%, 20=39.21% 00:19:25.125 cpu : usr=81.90%, sys=13.36%, ctx=15, majf=0, minf=4 00:19:25.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:25.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.125 issued rwts: total=17734,9122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.125 00:19:25.125 Run status group 0 (all jobs): 00:19:25.125 
READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=277MiB (291MB), run=2007-2007msec 00:19:25.125 WRITE: bw=79.2MiB/s (83.1MB/s), 79.2MiB/s-79.2MiB/s (83.1MB/s-83.1MB/s), io=143MiB (149MB), run=1799-1799msec 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:19:25.125 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:25.403 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:19:25.403 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:25.403 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:19:25.661 Nvme0n1 00:19:25.661 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d57cb972-1c5b-4170-a532-54571aa55cff 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d57cb972-1c5b-4170-a532-54571aa55cff 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d57cb972-1c5b-4170-a532-54571aa55cff 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:19:25.919 10:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:19:26.178 { 00:19:26.178 "uuid": "d57cb972-1c5b-4170-a532-54571aa55cff", 00:19:26.178 "name": "lvs_0", 00:19:26.178 "base_bdev": "Nvme0n1", 00:19:26.178 "total_data_clusters": 4, 00:19:26.178 "free_clusters": 4, 00:19:26.178 "block_size": 4096, 00:19:26.178 "cluster_size": 1073741824 00:19:26.178 } 00:19:26.178 ]' 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d57cb972-1c5b-4170-a532-54571aa55cff") .free_clusters' 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:19:26.178 
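(Editor's aside) The free_mb figure derived just below (4096) is simply the lvstore's free cluster count multiplied by its cluster size and converted to MiB; a minimal arithmetic sketch with illustrative variable names, not the get_lvs_free_mb helper itself:
fc=4                                  # free_clusters reported by bdev_lvol_get_lvstores
cs=1073741824                         # cluster_size in bytes (1 GiB clusters for lvs_0)
echo $(( fc * cs / 1024 / 1024 ))     # prints 4096 (MiB), the size handed to bdev_lvol_create
# The nested lvstore later in this run works out the same way: 1022 * 4194304 bytes -> 4088 MiB.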
10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d57cb972-1c5b-4170-a532-54571aa55cff") .cluster_size' 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:19:26.178 4096 00:19:26.178 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:19:26.436 0005e1eb-f71e-4f64-8895-831f4c1f0e9e 00:19:26.436 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:19:26.694 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:19:26.953 10:33:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:27.210 10:33:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:27.210 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:27.210 fio-3.35 00:19:27.210 Starting 1 thread 00:19:29.742 00:19:29.742 test: (groupid=0, jobs=1): err= 0: pid=90181: Tue Dec 10 10:33:04 2024 00:19:29.742 read: IOPS=5911, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec) 00:19:29.742 slat (nsec): min=1871, max=234166, avg=2669.94, stdev=3157.12 00:19:29.742 clat (usec): min=3060, max=20867, avg=11320.74, stdev=934.79 00:19:29.742 lat (usec): min=3067, max=20869, avg=11323.41, stdev=934.59 00:19:29.742 clat percentiles (usec): 00:19:29.742 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:19:29.742 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:19:29.742 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:19:29.742 | 99.00th=[13304], 99.50th=[13566], 99.90th=[18220], 99.95th=[19792], 00:19:29.742 | 99.99th=[20841] 00:19:29.742 bw ( KiB/s): min=22672, max=24112, per=99.93%, avg=23628.00, stdev=648.26, samples=4 00:19:29.742 iops : min= 5668, max= 6028, avg=5907.00, stdev=162.07, samples=4 00:19:29.742 write: IOPS=5910, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec); 0 zone resets 00:19:29.742 slat (nsec): min=1994, max=181896, avg=2809.62, stdev=2543.63 00:19:29.742 clat (usec): min=1921, max=18337, avg=10256.09, stdev=882.52 00:19:29.742 lat (usec): min=1931, max=18339, avg=10258.90, stdev=882.43 00:19:29.742 clat percentiles (usec): 00:19:29.742 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:19:29.742 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:19:29.742 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:19:29.742 | 99.00th=[12125], 99.50th=[12649], 99.90th=[16909], 99.95th=[17171], 00:19:29.742 | 99.99th=[18220] 00:19:29.742 bw ( KiB/s): min=23496, max=23768, per=99.90%, avg=23618.00, stdev=119.98, samples=4 00:19:29.742 iops : min= 5874, max= 5942, avg=5904.50, stdev=29.99, samples=4 00:19:29.742 lat (msec) : 2=0.01%, 4=0.05%, 10=21.33%, 20=78.60%, 50=0.01% 00:19:29.742 cpu : usr=73.90%, sys=20.72%, ctx=10, majf=0, minf=8 00:19:29.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:29.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.742 issued rwts: total=11876,11874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.742 00:19:29.742 Run status group 0 (all jobs): 00:19:29.742 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), 
run=2009-2009msec 00:19:29.742 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2009-2009msec 00:19:29.742 10:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:30.001 10:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=093c2c62-825d-4e8c-8eef-7b268da08634 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 093c2c62-825d-4e8c-8eef-7b268da08634 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=093c2c62-825d-4e8c-8eef-7b268da08634 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:19:30.001 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:30.567 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:19:30.567 { 00:19:30.567 "uuid": "d57cb972-1c5b-4170-a532-54571aa55cff", 00:19:30.567 "name": "lvs_0", 00:19:30.567 "base_bdev": "Nvme0n1", 00:19:30.567 "total_data_clusters": 4, 00:19:30.567 "free_clusters": 0, 00:19:30.567 "block_size": 4096, 00:19:30.567 "cluster_size": 1073741824 00:19:30.567 }, 00:19:30.567 { 00:19:30.567 "uuid": "093c2c62-825d-4e8c-8eef-7b268da08634", 00:19:30.567 "name": "lvs_n_0", 00:19:30.567 "base_bdev": "0005e1eb-f71e-4f64-8895-831f4c1f0e9e", 00:19:30.567 "total_data_clusters": 1022, 00:19:30.567 "free_clusters": 1022, 00:19:30.567 "block_size": 4096, 00:19:30.567 "cluster_size": 4194304 00:19:30.567 } 00:19:30.567 ]' 00:19:30.567 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="093c2c62-825d-4e8c-8eef-7b268da08634") .free_clusters' 00:19:30.567 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:19:30.568 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="093c2c62-825d-4e8c-8eef-7b268da08634") .cluster_size' 00:19:30.568 4088 00:19:30.568 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:19:30.568 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:19:30.568 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:19:30.568 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:30.826 f6fbc1be-3fb2-46b0-89a5-5db2004396a6 00:19:30.826 10:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:31.084 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:31.343 10:33:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:31.602 10:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:31.602 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:31.602 fio-3.35 00:19:31.602 Starting 1 thread 00:19:34.132 00:19:34.132 test: (groupid=0, jobs=1): err= 0: pid=90260: Tue Dec 10 10:33:09 2024 00:19:34.132 read: 
IOPS=5808, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2009msec) 00:19:34.132 slat (nsec): min=1843, max=355587, avg=2706.08, stdev=4743.93 00:19:34.132 clat (usec): min=3316, max=20834, avg=11540.67, stdev=948.86 00:19:34.132 lat (usec): min=3325, max=20836, avg=11543.38, stdev=948.46 00:19:34.132 clat percentiles (usec): 00:19:34.132 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:19:34.132 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:19:34.132 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:19:34.132 | 99.00th=[13566], 99.50th=[13960], 99.90th=[17695], 99.95th=[19006], 00:19:34.132 | 99.99th=[20841] 00:19:34.132 bw ( KiB/s): min=22403, max=23560, per=99.89%, avg=23210.75, stdev=549.13, samples=4 00:19:34.132 iops : min= 5600, max= 5890, avg=5802.50, stdev=137.65, samples=4 00:19:34.132 write: IOPS=5796, BW=22.6MiB/s (23.7MB/s)(45.5MiB/2009msec); 0 zone resets 00:19:34.132 slat (nsec): min=1932, max=267886, avg=2797.73, stdev=3302.81 00:19:34.132 clat (usec): min=2555, max=19112, avg=10438.62, stdev=894.40 00:19:34.132 lat (usec): min=2568, max=19114, avg=10441.42, stdev=894.22 00:19:34.132 clat percentiles (usec): 00:19:34.132 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9765], 00:19:34.132 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:19:34.132 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:19:34.132 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17433], 99.95th=[17695], 00:19:34.132 | 99.99th=[19006] 00:19:34.132 bw ( KiB/s): min=23072, max=23257, per=99.81%, avg=23142.25, stdev=88.88, samples=4 00:19:34.132 iops : min= 5768, max= 5814, avg=5785.50, stdev=22.11, samples=4 00:19:34.132 lat (msec) : 4=0.06%, 10=16.59%, 20=83.34%, 50=0.02% 00:19:34.132 cpu : usr=75.85%, sys=19.52%, ctx=6, majf=0, minf=8 00:19:34.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:34.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.132 issued rwts: total=11670,11645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.132 00:19:34.132 Run status group 0 (all jobs): 00:19:34.132 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2009-2009msec 00:19:34.132 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.5MiB (47.7MB), run=2009-2009msec 00:19:34.132 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:34.132 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:34.390 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:34.647 10:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:34.905 10:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:35.163 10:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:35.420 10:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.354 rmmod nvme_tcp 00:19:36.354 rmmod nvme_fabrics 00:19:36.354 rmmod nvme_keyring 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 89959 ']' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 89959 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 89959 ']' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 89959 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89959 00:19:36.354 killing process with pid 89959 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89959' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 89959 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 89959 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:36.354 
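(Editor's aside) The iptables cleanup step above works because every rule the test adds carries an SPDK_NVMF comment, so the whole set can be dropped by filtering the saved ruleset; condensed from the iptables-save / grep -v SPDK_NVMF / iptables-restore calls shown in this log:
iptables-save | grep -v SPDK_NVMF | iptables-restore   # reload everything except the SPDK-tagged rules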
10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:36.354 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:36.612 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:36.612 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:36.612 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.612 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:36.612 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:36.612 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:36.613 ************************************ 00:19:36.613 END TEST nvmf_fio_host 00:19:36.613 ************************************ 00:19:36.613 00:19:36.613 real 0m19.265s 00:19:36.613 user 1m24.440s 00:19:36.613 sys 0m4.316s 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:36.613 10:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.873 ************************************ 00:19:36.873 START TEST nvmf_failover 00:19:36.873 ************************************ 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:36.873 * Looking for test storage... 
00:19:36.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:19:36.873 10:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:36.873 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:36.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.874 --rc genhtml_branch_coverage=1 00:19:36.874 --rc genhtml_function_coverage=1 00:19:36.874 --rc genhtml_legend=1 00:19:36.874 --rc geninfo_all_blocks=1 00:19:36.874 --rc geninfo_unexecuted_blocks=1 00:19:36.874 00:19:36.874 ' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:36.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.874 --rc genhtml_branch_coverage=1 00:19:36.874 --rc genhtml_function_coverage=1 00:19:36.874 --rc genhtml_legend=1 00:19:36.874 --rc geninfo_all_blocks=1 00:19:36.874 --rc geninfo_unexecuted_blocks=1 00:19:36.874 00:19:36.874 ' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:36.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.874 --rc genhtml_branch_coverage=1 00:19:36.874 --rc genhtml_function_coverage=1 00:19:36.874 --rc genhtml_legend=1 00:19:36.874 --rc geninfo_all_blocks=1 00:19:36.874 --rc geninfo_unexecuted_blocks=1 00:19:36.874 00:19:36.874 ' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:36.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.874 --rc genhtml_branch_coverage=1 00:19:36.874 --rc genhtml_function_coverage=1 00:19:36.874 --rc genhtml_legend=1 00:19:36.874 --rc geninfo_all_blocks=1 00:19:36.874 --rc geninfo_unexecuted_blocks=1 00:19:36.874 00:19:36.874 ' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.874 
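(Editor's aside) Just above, nvme gen-hostnqn supplies both NVME_HOSTNQN and, via its uuid suffix, NVME_HOSTID; a hedged one-liner sketch of that split (the parameter-expansion form is illustrative, not necessarily what common.sh does):
HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a
HOSTID=${HOSTNQN##*uuid:}          # keeps just the UUID portion used for --hostid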
10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.874 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
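(Editor's aside) The "integer expression expected" message above is the shell complaining about an empty operand in a numeric test, not a test failure; a standalone reproduction outside the test scripts:
flag=''
[ "$flag" -eq 1 ] && echo enabled    # bash: [: : integer expression expected (status 2, echo never runs)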
00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:36.874 Cannot find device "nvmf_init_br" 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:36.874 Cannot find device "nvmf_init_br2" 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:36.874 Cannot find device "nvmf_tgt_br" 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:36.874 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.134 Cannot find device "nvmf_tgt_br2" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:37.134 Cannot find device "nvmf_init_br" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:37.134 Cannot find device "nvmf_init_br2" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:37.134 Cannot find device "nvmf_tgt_br" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:37.134 Cannot find device "nvmf_tgt_br2" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:37.134 Cannot find device "nvmf_br" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:37.134 Cannot find device "nvmf_init_if" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:37.134 Cannot find device "nvmf_init_if2" 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.134 
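(Editor's aside) The interface setup running above and below builds two veth pairs per side, bridges them, and keeps the target ends inside the nvmf_tgt_ns_spdk namespace; a condensed sketch of the pattern for one initiator/target pair, with addresses taken from the NVMF_* variables set earlier (this is not the full nvmf_veth_init):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                         # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                               # NVMF_FIRST_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if # NVMF_FIRST_TARGET_IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                                # both *_br ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
# (the real init also adds the second pair, brings every link up, and installs the iptables accepts)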
10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.134 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:37.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:19:37.393 00:19:37.393 --- 10.0.0.3 ping statistics --- 00:19:37.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.393 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:37.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:37.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:19:37.393 00:19:37.393 --- 10.0.0.4 ping statistics --- 00:19:37.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.393 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:19:37.393 00:19:37.393 --- 10.0.0.1 ping statistics --- 00:19:37.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.393 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:37.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:37.393 00:19:37.393 --- 10.0.0.2 ping statistics --- 00:19:37.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.393 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=90547 00:19:37.393 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 90547 00:19:37.394 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:37.394 10:33:12 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90547 ']' 00:19:37.394 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.394 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.394 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.394 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.394 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:37.394 [2024-12-10 10:33:12.498042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:37.394 [2024-12-10 10:33:12.498157] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.652 [2024-12-10 10:33:12.632931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:37.652 [2024-12-10 10:33:12.667648] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.652 [2024-12-10 10:33:12.667857] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.652 [2024-12-10 10:33:12.667961] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.652 [2024-12-10 10:33:12.668061] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.652 [2024-12-10 10:33:12.668146] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
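A reading aid for the bring-up trace above: the sketch below reproduces the veth/netns/bridge topology that nvmf/common.sh assembles before the target starts. Interface names, addresses and iptables rules are copied from the log; the real helper functions (ipts, the namespace wrappers) differ in detail, so treat this as an illustration only.

#!/usr/bin/env bash
# Minimal sketch of the NVMe-oF test network seen in the trace above.
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Four veth pairs: two initiator-side, two target-side.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; initiator ends stay in the root ns.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1/.2 for the initiator, 10.0.0.3/.4 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and join the *_br peers with a bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in and bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT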
00:19:37.652 [2024-12-10 10:33:12.668358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.652 [2024-12-10 10:33:12.668923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.652 [2024-12-10 10:33:12.668953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.652 [2024-12-10 10:33:12.697139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:37.652 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.652 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:37.652 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:37.652 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.652 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:37.653 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.653 10:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.912 [2024-12-10 10:33:13.097311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.912 10:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:38.171 Malloc0 00:19:38.429 10:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.688 10:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.946 10:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:39.205 [2024-12-10 10:33:14.247746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:39.205 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:39.464 [2024-12-10 10:33:14.475835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:39.464 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:39.723 [2024-12-10 10:33:14.700119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90603 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
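Once the target is up, the rpc.py calls above (failover.sh steps 22 through 28) configure everything the failover test needs: one TCP transport, one 64 MiB malloc bdev, and a single subsystem exposed on three listeners that the test can later remove one at a time. A condensed recap, with paths and arguments copied from the trace; this is a sketch, not the test script itself:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# TCP transport, with the same extra options the test passes (-o, -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MiB backing bdev with 512-byte blocks.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
# Three listeners on the in-namespace address 10.0.0.3.
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s "$port"
done

bdevperf is then launched with -q 128 -o 4096 -w verify -t 15 -f against its own RPC socket (/var/tmp/bdevperf.sock), ready to have NVMe paths attached to it, as the trace that follows shows.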
00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90603 /var/tmp/bdevperf.sock 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90603 ']' 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.723 10:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:39.982 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.982 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:39.982 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:40.241 NVMe0n1 00:19:40.241 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:40.499 00:19:40.499 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.499 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90619 00:19:40.499 10:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:41.435 10:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:41.694 10:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:44.980 10:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:45.239 00:19:45.239 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:45.498 10:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:48.786 10:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:48.786 [2024-12-10 10:33:23.779293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:48.786 10:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:49.722 10:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:49.985 10:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90619 00:19:56.555 { 00:19:56.555 "results": [ 00:19:56.555 { 00:19:56.555 "job": "NVMe0n1", 00:19:56.555 "core_mask": "0x1", 00:19:56.555 "workload": "verify", 00:19:56.555 "status": "finished", 00:19:56.555 "verify_range": { 00:19:56.555 "start": 0, 00:19:56.555 "length": 16384 00:19:56.555 }, 00:19:56.555 "queue_depth": 128, 00:19:56.555 "io_size": 4096, 00:19:56.555 "runtime": 15.008497, 00:19:56.555 "iops": 10048.241339555852, 00:19:56.555 "mibps": 39.25094273264005, 00:19:56.555 "io_failed": 3173, 00:19:56.555 "io_timeout": 0, 00:19:56.555 "avg_latency_us": 12447.461265578859, 00:19:56.555 "min_latency_us": 536.2036363636364, 00:19:56.555 "max_latency_us": 13702.981818181817 00:19:56.555 } 00:19:56.555 ], 00:19:56.556 "core_count": 1 00:19:56.556 } 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90603 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90603 ']' 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90603 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90603 00:19:56.556 killing process with pid 90603 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90603' 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90603 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90603 00:19:56.556 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:56.556 [2024-12-10 10:33:14.757688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:56.556 [2024-12-10 10:33:14.757781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90603 ] 00:19:56.556 [2024-12-10 10:33:14.888290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.556 [2024-12-10 10:33:14.930135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.556 [2024-12-10 10:33:14.964394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:56.556 Running I/O for 15 seconds... 
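The JSON block above is the end-of-run summary, and the try.txt dump that follows replays the bdevperf side of the run. For orientation, the listener add/remove cycle that produced the abort and reset messages below (failover.sh steps 35 through 59 in the trace) amounts roughly to the sketch here; paths, ports and sleeps are copied from the log, and this is an illustration rather than the actual test script:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF="$RPC -s /var/tmp/bdevperf.sock"      # talks to bdevperf, not the target
NQN=nqn.2016-06.io.spdk:cnode1

# Two paths to the same subsystem: 4420 (primary) and 4421 (alternate).
$BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$NQN"
$BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$NQN"

# Kick off 15 s of verify I/O, then pull listeners out from under the paths.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420   # force failover to 4421
sleep 3
$BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$NQN"
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421   # fail over again
sleep 3
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420      # bring 4420 back
sleep 1
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422
wait    # bdevperf.py prints the JSON summary once the 15 s run completes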
00:19:56.556 7828.00 IOPS, 30.58 MiB/s [2024-12-10T10:33:31.783Z] [2024-12-10 10:33:16.902776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.902842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.902900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.902914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.902927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.902940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.902952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.902966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.902978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.902991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:56.556 [2024-12-10 10:33:16.903137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.556 [2024-12-10 10:33:16.903830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.556 [2024-12-10 10:33:16.903846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.903861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.903878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.903893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.903909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.903922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.903967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.903995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 
10:33:16.904409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.904927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.557 [2024-12-10 10:33:16.904979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.904992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.557 [2024-12-10 10:33:16.905004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.557 [2024-12-10 10:33:16.905018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.558 [2024-12-10 10:33:16.905210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:56.558 [2024-12-10 10:33:16.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.558 [2024-12-10 10:33:16.905815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.558 [2024-12-10 10:33:16.905829] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.558 [2024-12-10 10:33:16.905841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive per-command abort log condensed: READ commands on sqid:1 for lba 72200 through 72392 (len:8 each) were likewise printed and completed ABORTED - SQ DELETION (00/08) ...]
00:19:56.559 [2024-12-10 10:33:16.906562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac8090 is same with the state(6) to be set
00:19:56.559 [2024-12-10 10:33:16.906577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:56.559 [2024-12-10 10:33:16.906587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:56.559 [2024-12-10 10:33:16.906597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72400 len:8 PRP1 0x0 PRP2 0x0
00:19:56.559 [2024-12-10 10:33:16.906612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.559 [2024-12-10 10:33:16.906656] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xac8090 was disconnected and freed. reset controller.
00:19:56.559 [2024-12-10 10:33:16.906672] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[... condensed: four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) were printed and completed ABORTED - SQ DELETION (00/08) ...]
00:19:56.559 [2024-12-10 10:33:16.906852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:56.559 [2024-12-10 10:33:16.910480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:56.559 [2024-12-10 10:33:16.910516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6cc0 (9): Bad file descriptor
00:19:56.559 [2024-12-10 10:33:16.941884] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:56.559 8694.00 IOPS, 33.96 MiB/s [2024-12-10T10:33:31.786Z] 9284.00 IOPS, 36.27 MiB/s [2024-12-10T10:33:31.786Z] 9599.00 IOPS, 37.50 MiB/s [2024-12-10T10:33:31.786Z] [2024-12-10 10:33:20.492553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:56.559 [2024-12-10 10:33:20.492613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive per-command abort log condensed: WRITE commands (lba 114264-114696) and READ commands (lba 113808-114240) on sqid:1 were each printed and completed ABORTED - SQ DELETION (00/08) ...]
00:19:56.562 [2024-12-10 10:33:20.495791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac8d80 is same with the state(6) to be set
00:19:56.562 [2024-12-10 10:33:20.495808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:56.562 [2024-12-10 10:33:20.495819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:56.562 [2024-12-10 10:33:20.495830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114248 len:8 PRP1 0x0 PRP2 0x0
00:19:56.562 [2024-12-10 10:33:20.495853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... condensed: the remaining queued WRITE commands (lba 114704-114824) on qid:1 were each aborted, completed manually, and printed ABORTED - SQ DELETION (00/08) ...]
00:19:56.563 [2024-12-10 10:33:20.496733] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xac8d80 was disconnected and freed. reset controller.
00:19:56.563 [2024-12-10 10:33:20.496752] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
[... condensed: four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) were printed and completed ABORTED - SQ DELETION (00/08) ...]
00:19:56.563 [2024-12-10 10:33:20.496948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:56.563 [2024-12-10 10:33:20.496992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6cc0 (9): Bad file descriptor
00:19:56.563 [2024-12-10 10:33:20.500593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:56.563 [2024-12-10 10:33:20.531667] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:56.563 9647.20 IOPS, 37.68 MiB/s [2024-12-10T10:33:31.790Z] 9773.67 IOPS, 38.18 MiB/s [2024-12-10T10:33:31.790Z] 9848.29 IOPS, 38.47 MiB/s [2024-12-10T10:33:31.790Z] 9911.25 IOPS, 38.72 MiB/s [2024-12-10T10:33:31.790Z] 9966.00 IOPS, 38.93 MiB/s [2024-12-10T10:33:31.790Z] [2024-12-10 10:33:25.069875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.563 [2024-12-10 10:33:25.069935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repetitive per-command abort log condensed: READ commands (lba 104648-104696) and WRITE commands (lba 105216-105296) on sqid:1 were each printed and completed ABORTED - SQ DELETION (00/08) ...]
00:19:56.563 [2024-12-10 10:33:25.070508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105304 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:19:56.563 [2024-12-10 10:33:25.070521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.563 [2024-12-10 10:33:25.070547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.563 [2024-12-10 10:33:25.070573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.563 [2024-12-10 10:33:25.070600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.563 [2024-12-10 10:33:25.070635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.563 [2024-12-10 10:33:25.070660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.563 [2024-12-10 10:33:25.070686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.563 [2024-12-10 10:33:25.070700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 
[2024-12-10 10:33:25.070790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.070984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.070997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.564 [2024-12-10 10:33:25.071278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071648] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.564 [2024-12-10 10:33:25.071703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.564 [2024-12-10 10:33:25.071718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.071984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.071997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.565 [2024-12-10 10:33:25.072433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 
10:33:25.072500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.565 [2024-12-10 10:33:25.072828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.565 [2024-12-10 10:33:25.072840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.072853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.072866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.072880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.072892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.072906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.072918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.072938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.072951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.072965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.072978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.072991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073044] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.566 [2024-12-10 10:33:25.073302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.566 [2024-12-10 10:33:25.073520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.566 [2024-12-10 10:33:25.073580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.566 [2024-12-10 10:33:25.073591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105208 len:8 PRP1 0x0 PRP2 0x0 00:19:56.566 [2024-12-10 10:33:25.073604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.566 [2024-12-10 10:33:25.073649] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc063c0 was disconnected and freed. reset controller. 
00:19:56.566 [2024-12-10 10:33:25.073666] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:19:56.566 [2024-12-10 10:33:25.073715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:56.566 [2024-12-10 10:33:25.073735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.566 [2024-12-10 10:33:25.073749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:56.566 [2024-12-10 10:33:25.073762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.566 [2024-12-10 10:33:25.073786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:56.566 [2024-12-10 10:33:25.073799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.566 [2024-12-10 10:33:25.073813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:56.566 [2024-12-10 10:33:25.073855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.566 [2024-12-10 10:33:25.073868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:56.566 [2024-12-10 10:33:25.077367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:56.566 [2024-12-10 10:33:25.077429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6cc0 (9): Bad file descriptor
00:19:56.566 [2024-12-10 10:33:25.111581] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
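This is the third and last forced failover of the 15-second run, and the harness next judges the run by counting how many of these recovery cycles the captured bdevperf output contains; the grep -c and count=3 trace on the following lines is that check. A standalone sketch of the same idea, under the assumption that the run's output was captured to the test's try.txt file (the same path this script later cats and removes):

# Count completed failover recoveries in the captured bdevperf log.
count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
# Three paths were pulled out from under the initiator, so exactly three recoveries are expected.
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi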
00:19:56.566 9943.30 IOPS, 38.84 MiB/s [2024-12-10T10:33:31.793Z] 9978.27 IOPS, 38.98 MiB/s [2024-12-10T10:33:31.793Z] 10010.08 IOPS, 39.10 MiB/s [2024-12-10T10:33:31.793Z] 10029.00 IOPS, 39.18 MiB/s [2024-12-10T10:33:31.793Z] 10042.93 IOPS, 39.23 MiB/s [2024-12-10T10:33:31.793Z] 10048.60 IOPS, 39.25 MiB/s
00:19:56.566 Latency(us)
00:19:56.566 [2024-12-10T10:33:31.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.566 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:56.566 Verification LBA range: start 0x0 length 0x4000
00:19:56.566 NVMe0n1 : 15.01 10048.24 39.25 211.41 0.00 12447.46 536.20 13702.98
00:19:56.566 [2024-12-10T10:33:31.793Z] ===================================================================================================================
00:19:56.566 [2024-12-10T10:33:31.793Z] Total : 10048.24 39.25 211.41 0.00 12447.46 536.20 13702.98
00:19:56.566 Received shutdown signal, test time was about 15.000000 seconds
00:19:56.566
00:19:56.566 Latency(us)
00:19:56.566 [2024-12-10T10:33:31.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.566 [2024-12-10T10:33:31.793Z] ===================================================================================================================
00:19:56.566 [2024-12-10T10:33:31.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:56.566 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:19:56.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:56.566 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:19:56.566 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:19:56.566 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90792
00:19:56.566 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:19:56.566 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90792 /var/tmp/bdevperf.sock
00:19:56.567 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90792 ']'
00:19:56.567 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:56.567 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:56.567 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:56.567 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:56.567 10:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:56.567 10:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:56.567 10:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:19:56.567 10:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:19:56.567 [2024-12-10 10:33:31.462113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:19:56.567 10:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:19:56.567 [2024-12-10 10:33:31.698245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:19:56.567 10:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:56.826 NVMe0n1
00:19:56.826 10:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:57.393
00:19:57.393 10:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:57.652
00:19:57.652 10:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:57.652 10:33:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:19:57.911 10:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:58.170 10:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:20:01.470 10:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:01.470 10:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:20:01.470 10:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:01.470 10:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90861
00:20:01.470 10:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90861
00:20:02.450 {
00:20:02.450 "results": [
00:20:02.450 {
00:20:02.450 "job": "NVMe0n1",
00:20:02.450 "core_mask": "0x1",
00:20:02.450 "workload": "verify",
00:20:02.450 "status": "finished",
00:20:02.450 "verify_range": {
00:20:02.450 "start": 0,
00:20:02.450 "length": 16384
00:20:02.450 },
00:20:02.450 "queue_depth": 128,
00:20:02.450 "io_size": 4096,
00:20:02.450 "runtime": 1.008487, 00:20:02.450 "iops": 8012.002137856016, 00:20:02.450 "mibps": 31.296883351000062, 00:20:02.450 "io_failed": 0, 00:20:02.450 "io_timeout": 0, 00:20:02.450 "avg_latency_us": 15888.733321332133, 00:20:02.450 "min_latency_us": 1280.9309090909092, 00:20:02.450 "max_latency_us": 14596.654545454545 00:20:02.450 } 00:20:02.450 ], 00:20:02.450 "core_count": 1 00:20:02.450 } 00:20:02.450 10:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:02.450 [2024-12-10 10:33:30.992839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:02.450 [2024-12-10 10:33:30.992952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90792 ] 00:20:02.450 [2024-12-10 10:33:31.122991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.450 [2024-12-10 10:33:31.155923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.450 [2024-12-10 10:33:31.183226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:02.450 [2024-12-10 10:33:33.219791] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:02.450 [2024-12-10 10:33:33.219914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.450 [2024-12-10 10:33:33.219971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.450 [2024-12-10 10:33:33.219988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.450 [2024-12-10 10:33:33.220001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.450 [2024-12-10 10:33:33.220028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.450 [2024-12-10 10:33:33.220041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.450 [2024-12-10 10:33:33.220068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.450 [2024-12-10 10:33:33.220080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.451 [2024-12-10 10:33:33.220092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:02.451 [2024-12-10 10:33:33.220134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.451 [2024-12-10 10:33:33.220162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0cc0 (9): Bad file descriptor 00:20:02.451 [2024-12-10 10:33:33.226276] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:02.451 Running I/O for 1 seconds... 
00:20:02.451 7952.00 IOPS, 31.06 MiB/s
00:20:02.451
00:20:02.451 Latency(us)
00:20:02.451 [2024-12-10T10:33:37.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:02.451 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:02.451 Verification LBA range: start 0x0 length 0x4000
00:20:02.451 NVMe0n1 : 1.01 8012.00 31.30 0.00 0.00 15888.73 1280.93 14596.65
00:20:02.451 [2024-12-10T10:33:37.678Z] ===================================================================================================================
00:20:02.451 [2024-12-10T10:33:37.678Z] Total : 8012.00 31.30 0.00 0.00 15888.73 1280.93 14596.65
00:20:02.451 10:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:20:02.451 10:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:02.709 10:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:02.968 10:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:02.968 10:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:03.228 10:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:03.487 10:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90792
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90792 ']'
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90792
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90792
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:06.776 killing process with pid 90792
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90792'
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90792
00:20:06.776 10:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90792
00:20:07.035 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:20:07.035 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:07.295 rmmod nvme_tcp
00:20:07.295 rmmod nvme_fabrics
00:20:07.295 rmmod nvme_keyring
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 90547 ']'
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 90547
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90547 ']'
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90547
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90547
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:20:07.295 killing process with pid 90547
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90547'
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90547
00:20:07.295 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90547
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore
00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:07.555 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:07.814 00:20:07.814 real 0m31.000s 00:20:07.814 user 1m59.489s 00:20:07.814 sys 0m5.359s 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:07.814 ************************************ 00:20:07.814 END TEST nvmf_failover 00:20:07.814 ************************************ 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.814 ************************************ 00:20:07.814 START TEST nvmf_host_discovery 00:20:07.814 ************************************ 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:07.814 * Looking for test storage... 
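For reference, the nvmftestfini/nvmf_veth_fini teardown traced just above amounts roughly to the following (a sketch using the interface and namespace names from this run; the final namespace removal is the presumed effect of _remove_spdk_ns):

# drop only the SPDK-tagged firewall rules
iptables-save | grep -v SPDK_NVMF | iptables-restore

# detach every veth endpoint from the bridge and bring it down
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done

# delete the bridge, the initiator-side veths and the target-side veths
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

# _remove_spdk_ns is assumed to dispose of the namespace itself
ip netns delete nvmf_tgt_ns_spdk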
00:20:07.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:20:07.814 10:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.074 --rc genhtml_branch_coverage=1 00:20:08.074 --rc genhtml_function_coverage=1 00:20:08.074 --rc genhtml_legend=1 00:20:08.074 --rc geninfo_all_blocks=1 00:20:08.074 --rc geninfo_unexecuted_blocks=1 00:20:08.074 00:20:08.074 ' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.074 --rc genhtml_branch_coverage=1 00:20:08.074 --rc genhtml_function_coverage=1 00:20:08.074 --rc genhtml_legend=1 00:20:08.074 --rc geninfo_all_blocks=1 00:20:08.074 --rc geninfo_unexecuted_blocks=1 00:20:08.074 00:20:08.074 ' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.074 --rc genhtml_branch_coverage=1 00:20:08.074 --rc genhtml_function_coverage=1 00:20:08.074 --rc genhtml_legend=1 00:20:08.074 --rc geninfo_all_blocks=1 00:20:08.074 --rc geninfo_unexecuted_blocks=1 00:20:08.074 00:20:08.074 ' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.074 --rc genhtml_branch_coverage=1 00:20:08.074 --rc genhtml_function_coverage=1 00:20:08.074 --rc genhtml_legend=1 00:20:08.074 --rc geninfo_all_blocks=1 00:20:08.074 --rc geninfo_unexecuted_blocks=1 00:20:08.074 00:20:08.074 ' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.074 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:08.075 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
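common.sh keeps the target's side of the test network in a private namespace; every target-side command is simply prefixed with the NVMF_TARGET_NS_CMD array defined above. A small illustration of how that array gets used (values taken from this run):

NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# target-side plumbing runs inside the namespace, e.g.
"${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# and the target application itself is later wrapped the same way:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")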
00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:08.075 Cannot find device "nvmf_init_br" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:08.075 Cannot find device "nvmf_init_br2" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:08.075 Cannot find device "nvmf_tgt_br" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.075 Cannot find device "nvmf_tgt_br2" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:08.075 Cannot find device "nvmf_init_br" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:08.075 Cannot find device "nvmf_init_br2" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:08.075 Cannot find device "nvmf_tgt_br" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:08.075 Cannot find device "nvmf_tgt_br2" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:08.075 Cannot find device "nvmf_br" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:08.075 Cannot find device "nvmf_init_if" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:08.075 Cannot find device "nvmf_init_if2" 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.075 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:08.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:20:08.335 00:20:08.335 --- 10.0.0.3 ping statistics --- 00:20:08.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.335 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:08.335 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:08.335 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:20:08.335 00:20:08.335 --- 10.0.0.4 ping statistics --- 00:20:08.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.335 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:08.335 00:20:08.335 --- 10.0.0.1 ping statistics --- 00:20:08.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.335 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:08.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:08.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:08.335 00:20:08.335 --- 10.0.0.2 ping statistics --- 00:20:08.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.335 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:08.335 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=91187 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 91187 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 91187 ']' 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.336 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.336 [2024-12-10 10:33:43.560630] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
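At this point the veth/bridge plumbing and the four ping checks have pinned down the topology for the rest of the test: nvmf_init_if/nvmf_init_if2 hold 10.0.0.1 and 10.0.0.2 in the root namespace, nvmf_tgt_if/nvmf_tgt_if2 hold 10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk, and all four are joined through the nvmf_br bridge. nvmfappstart then launches the target inside the namespace; a condensed sketch of that start-up (binary path, flags and socket from this run; the backgrounding and PID capture are assumptions about the wrapper):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!    # recorded as 91187 in this run

# waitforlisten polls until the RPC server answers on /var/tmp/spdk.sock
waitforlisten "$nvmfpid"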
00:20:08.336 [2024-12-10 10:33:43.560721] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.595 [2024-12-10 10:33:43.694672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.595 [2024-12-10 10:33:43.727993] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.595 [2024-12-10 10:33:43.728078] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.595 [2024-12-10 10:33:43.728103] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.595 [2024-12-10 10:33:43.728110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.595 [2024-12-10 10:33:43.728116] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.595 [2024-12-10 10:33:43.728139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.595 [2024-12-10 10:33:43.755046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:08.595 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.595 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:08.595 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:08.595 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.595 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 [2024-12-10 10:33:43.861496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 [2024-12-10 10:33:43.869628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.854 10:33:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 null0 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 null1 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91206 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91206 /tmp/host.sock 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 91206 ']' 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.854 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.854 10:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.854 [2024-12-10 10:33:43.960822] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
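Two SPDK processes are now in play: the target just configured (pid 91187, RPC on the default /var/tmp/spdk.sock, with a discovery listener on 10.0.0.3:8009 and two null bdevs) and a second nvmf_tgt acting as the discovery host (pid 91206, RPC on /tmp/host.sock). A condensed sketch of the set-up traced above, with rpc_cmd written out as rpc.py (an assumption about the wrapper; sizes, ports and paths are from this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: TCP transport plus the well-known discovery NQN listening on 8009
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.3 -s 8009
$rpc bdev_null_create null0 1000 512    # name, size, block size
$rpc bdev_null_create null1 1000 512
$rpc bdev_wait_for_examine

# host side: a second nvmf_tgt whose bdev_nvme module will drive discovery
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!    # recorded as 91206 in this run
waitforlisten "$hostpid" /tmp/host.sock
$rpc -s /tmp/host.sock log_set_flag bdev_nvme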
00:20:08.854 [2024-12-10 10:33:43.960929] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91206 ] 00:20:09.113 [2024-12-10 10:33:44.103180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.113 [2024-12-10 10:33:44.143776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.113 [2024-12-10 10:33:44.175847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.113 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.372 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:09.372 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:09.372 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@91 -- # get_subsystem_names 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.373 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 [2024-12-10 10:33:44.593748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:09.632 10:33:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:20:09.632 10:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:20:10.199 [2024-12-10 10:33:45.244226] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:10.199 [2024-12-10 10:33:45.244272] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:10.199 [2024-12-10 10:33:45.244289] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:10.199 [2024-12-10 10:33:45.250266] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:10.199 [2024-12-10 10:33:45.306959] 
bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:10.199 [2024-12-10 10:33:45.306985] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:10.766 10:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:11.026 10:33:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 [2024-12-10 10:33:46.179033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:11.026 [2024-12-10 10:33:46.179298] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:11.026 [2024-12-10 10:33:46.179338] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:11.026 [2024-12-10 10:33:46.185299] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:11.026 10:33:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.026 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:11.027 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:11.027 [2024-12-10 10:33:46.247736] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:11.027 [2024-12-10 10:33:46.247754] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:11.027 [2024-12-10 10:33:46.247761] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:11.286 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.287 [2024-12-10 10:33:46.411882] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:11.287 [2024-12-10 10:33:46.411942] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:11.287 [2024-12-10 10:33:46.414896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.287 [2024-12-10 10:33:46.414945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.287 [2024-12-10 10:33:46.414974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.287 [2024-12-10 10:33:46.414982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.287 [2024-12-10 10:33:46.414991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.287 [2024-12-10 10:33:46.414999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.287 [2024-12-10 10:33:46.415008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.287 [2024-12-10 10:33:46.415016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.287 [2024-12-10 10:33:46.415024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589740 is same with the state(6) to be set 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:11.287 [2024-12-10 10:33:46.417890] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:11.287 [2024-12-10 10:33:46.417934] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:11.287 [2024-12-10 10:33:46.418006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1589740 (9): Bad file descriptor 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:11.287 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:11.547 10:33:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:11.547 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.548 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.806 10:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 [2024-12-10 10:33:47.825024] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:12.742 [2024-12-10 10:33:47.825047] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:12.742 [2024-12-10 10:33:47.825061] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:12.742 [2024-12-10 10:33:47.831064] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:12.742 [2024-12-10 10:33:47.891441] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:12.742 [2024-12-10 10:33:47.891477] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 request: 00:20:12.742 { 00:20:12.742 "name": "nvme", 00:20:12.742 "trtype": "tcp", 00:20:12.742 "traddr": "10.0.0.3", 00:20:12.742 "adrfam": "ipv4", 00:20:12.742 "trsvcid": "8009", 00:20:12.742 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:12.742 "wait_for_attach": true, 00:20:12.742 "method": "bdev_nvme_start_discovery", 00:20:12.742 "req_id": 1 00:20:12.742 } 00:20:12.742 Got JSON-RPC error response 00:20:12.742 response: 00:20:12.742 { 00:20:12.742 "code": -17, 00:20:12.742 "message": "File exists" 00:20:12.742 } 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:12.742 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.002 10:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.002 request: 00:20:13.002 { 00:20:13.002 "name": "nvme_second", 00:20:13.002 "trtype": "tcp", 00:20:13.002 "traddr": "10.0.0.3", 00:20:13.002 "adrfam": "ipv4", 00:20:13.002 "trsvcid": "8009", 00:20:13.002 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:13.002 "wait_for_attach": true, 00:20:13.002 "method": "bdev_nvme_start_discovery", 00:20:13.002 "req_id": 1 00:20:13.002 } 00:20:13.002 Got JSON-RPC error response 00:20:13.002 response: 00:20:13.002 { 00:20:13.002 "code": -17, 00:20:13.002 "message": "File exists" 00:20:13.002 } 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:13.002 10:33:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.002 10:33:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.937 [2024-12-10 10:33:49.156313] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.937 [2024-12-10 10:33:49.156374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1607ae0 with addr=10.0.0.3, port=8010 00:20:13.937 [2024-12-10 10:33:49.156393] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:13.937 [2024-12-10 10:33:49.156417] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:13.937 [2024-12-10 10:33:49.156474] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:15.314 [2024-12-10 10:33:50.156311] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:15.314 [2024-12-10 10:33:50.156371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1607ae0 with addr=10.0.0.3, port=8010 00:20:15.314 [2024-12-10 10:33:50.156389] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:15.314 [2024-12-10 10:33:50.156397] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:20:15.314 [2024-12-10 10:33:50.156405] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:16.251 [2024-12-10 10:33:51.156217] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:16.251 request: 00:20:16.251 { 00:20:16.251 "name": "nvme_second", 00:20:16.251 "trtype": "tcp", 00:20:16.251 "traddr": "10.0.0.3", 00:20:16.251 "adrfam": "ipv4", 00:20:16.251 "trsvcid": "8010", 00:20:16.251 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:16.251 "wait_for_attach": false, 00:20:16.251 "attach_timeout_ms": 3000, 00:20:16.251 "method": "bdev_nvme_start_discovery", 00:20:16.251 "req_id": 1 00:20:16.251 } 00:20:16.251 Got JSON-RPC error response 00:20:16.251 response: 00:20:16.251 { 00:20:16.251 "code": -110, 00:20:16.251 "message": "Connection timed out" 00:20:16.251 } 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91206 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:16.251 rmmod nvme_tcp 00:20:16.251 rmmod nvme_fabrics 00:20:16.251 rmmod nvme_keyring 00:20:16.251 10:33:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 91187 ']' 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 91187 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 91187 ']' 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 91187 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91187 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91187' 00:20:16.251 killing process with pid 91187 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 91187 00:20:16.251 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 91187 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
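The discovery checks traced above all go through the waitforcondition helper from common/autotest_common.sh: it evaluates a condition string repeatedly, sleeping one second between attempts, until the condition holds or the attempts run out. The following is a minimal sketch of that polling loop, reconstructed from the @914-@920 xtrace lines visible in this log; the exact body in the repository may differ in detail.

    waitforcondition() {
        local cond=$1          # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10           # at most ten attempts, as shown by "local max=10" in the trace
        while (( max-- )); do
            if eval "$cond"; then
                return 0       # condition satisfied; caller proceeds to the next check
            fi
            sleep 1            # wait a second and re-evaluate
        done
        return 1               # condition never became true; the test treats this as a failure
    }

For example, host/discovery.sh@105 in the trace calls waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' to wait until the newly attached controller appears in the bdev_nvme_get_controllers output on /tmp/host.sock.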
00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.510 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:16.770 ************************************ 00:20:16.770 END TEST nvmf_host_discovery 00:20:16.770 ************************************ 00:20:16.770 00:20:16.770 real 0m8.863s 00:20:16.770 user 0m16.900s 00:20:16.770 sys 0m1.905s 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.770 ************************************ 00:20:16.770 START TEST nvmf_host_multipath_status 00:20:16.770 ************************************ 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:16.770 * Looking for test storage... 
00:20:16.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.770 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:16.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.771 --rc genhtml_branch_coverage=1 00:20:16.771 --rc genhtml_function_coverage=1 00:20:16.771 --rc genhtml_legend=1 00:20:16.771 --rc geninfo_all_blocks=1 00:20:16.771 --rc geninfo_unexecuted_blocks=1 00:20:16.771 00:20:16.771 ' 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:16.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.771 --rc genhtml_branch_coverage=1 00:20:16.771 --rc genhtml_function_coverage=1 00:20:16.771 --rc genhtml_legend=1 00:20:16.771 --rc geninfo_all_blocks=1 00:20:16.771 --rc geninfo_unexecuted_blocks=1 00:20:16.771 00:20:16.771 ' 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:16.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.771 --rc genhtml_branch_coverage=1 00:20:16.771 --rc genhtml_function_coverage=1 00:20:16.771 --rc genhtml_legend=1 00:20:16.771 --rc geninfo_all_blocks=1 00:20:16.771 --rc geninfo_unexecuted_blocks=1 00:20:16.771 00:20:16.771 ' 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:16.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.771 --rc genhtml_branch_coverage=1 00:20:16.771 --rc genhtml_function_coverage=1 00:20:16.771 --rc genhtml_legend=1 00:20:16.771 --rc geninfo_all_blocks=1 00:20:16.771 --rc geninfo_unexecuted_blocks=1 00:20:16.771 00:20:16.771 ' 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.771 10:33:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.771 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.031 10:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:17.031 Cannot find device "nvmf_init_br" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:17.031 Cannot find device "nvmf_init_br2" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:17.031 Cannot find device "nvmf_tgt_br" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:17.031 Cannot find device "nvmf_tgt_br2" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:17.031 Cannot find device "nvmf_init_br" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:17.031 Cannot find device "nvmf_init_br2" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:17.031 Cannot find device "nvmf_tgt_br" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:17.031 Cannot find device "nvmf_tgt_br2" 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:17.031 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:17.031 Cannot find device "nvmf_br" 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:17.032 Cannot find device "nvmf_init_if" 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:17.032 Cannot find device "nvmf_init_if2" 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:17.032 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:17.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:17.291 00:20:17.291 --- 10.0.0.3 ping statistics --- 00:20:17.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.291 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:17.291 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:17.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:20:17.291 00:20:17.291 --- 10.0.0.4 ping statistics --- 00:20:17.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.291 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:17.291 00:20:17.291 --- 10.0.0.1 ping statistics --- 00:20:17.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.291 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:17.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:17.291 00:20:17.291 --- 10.0.0.2 ping statistics --- 00:20:17.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.291 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=91702 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 91702 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91702 ']' 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.291 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:17.291 [2024-12-10 10:33:52.475523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:17.291 [2024-12-10 10:33:52.475627] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.550 [2024-12-10 10:33:52.616355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:17.550 [2024-12-10 10:33:52.649252] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.550 [2024-12-10 10:33:52.649310] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.550 [2024-12-10 10:33:52.649320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.550 [2024-12-10 10:33:52.649326] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.550 [2024-12-10 10:33:52.649332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.550 [2024-12-10 10:33:52.649475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.550 [2024-12-10 10:33:52.649823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.550 [2024-12-10 10:33:52.676848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91702 00:20:17.550 10:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:18.118 [2024-12-10 10:33:53.053295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.118 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:18.377 Malloc0 00:20:18.377 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:18.377 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.945 10:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:18.945 [2024-12-10 10:33:54.088372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:18.945 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:19.204 [2024-12-10 10:33:54.308479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91744 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91744 /var/tmp/bdevperf.sock 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91744 ']' 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.204 10:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:20.140 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.140 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:20.140 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:20.398 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:20.657 Nvme0n1 00:20:20.657 10:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:20.916 Nvme0n1 00:20:20.916 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:20.916 10:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.450 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:23.450 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:23.450 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:23.450 10:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:24.857 10:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.857 10:33:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:25.115 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:25.115 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:25.115 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.115 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:25.375 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.375 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:25.375 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.375 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:25.633 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.633 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:25.634 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.634 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:25.893 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.893 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:25.893 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.893 10:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:25.893 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.893 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:25.893 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:26.460 10:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:26.460 10:34:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:27.837 10:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.096 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.096 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:28.096 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.096 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:28.096 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.096 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:28.355 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:28.355 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.614 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.614 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:28.614 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:28.614 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.873 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.873 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:28.873 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.873 10:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:29.131 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.131 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:29.131 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:29.390 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:29.390 10:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.774 10:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:31.033 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:31.033 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:31.033 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.033 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:31.292 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.292 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:31.292 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.292 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:31.551 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.551 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:31.551 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.551 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:31.810 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.810 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:31.810 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:31.810 10:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.069 10:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.070 10:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:32.070 10:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:32.328 10:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:32.587 10:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:33.525 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:33.525 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:33.525 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.525 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:33.784 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.784 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:33.784 10:34:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.784 10:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:34.043 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:34.043 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:34.043 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.043 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:34.302 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.302 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:34.302 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.302 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:34.561 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.561 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:34.561 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.561 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:34.820 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.820 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:34.820 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.820 10:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:35.079 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:35.079 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:35.079 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:35.338 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:35.338 10:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:36.716 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:36.716 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:36.716 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.717 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:36.717 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:36.717 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:36.717 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.717 10:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:36.976 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:36.976 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:36.976 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.976 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:37.235 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.235 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:37.235 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.235 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:37.494 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.494 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:37.494 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.494 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:37.753 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:37.753 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:37.753 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:37.753 10:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.012 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:38.012 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:38.012 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:38.271 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:38.530 10:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:39.467 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:39.467 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:39.467 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.467 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:39.726 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:39.726 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:39.726 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:39.726 10:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.985 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.985 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:39.985 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.985 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
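For reference, every port_status assertion traced above reduces to one bdev_nvme_get_io_paths RPC against the bdevperf socket plus a jq filter on the matching trsvcid. A minimal sketch of an equivalent helper, reconstructed from the commands visible in this log rather than copied from multipath_status.sh, would be:

  # port_status <port> <field> <expected>, e.g. port_status 4420 current true
  port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    # Query the io_paths seen by bdevperf and pick the requested field for this listener port
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
  }

This sketch assumes a single io_path per listener port, which matches the single true/false values the log compares against.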
00:20:40.244 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:40.244 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:40.244 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.244 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:40.503 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:40.503 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:40.503 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:40.503 10:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:41.071 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:41.071 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:41.071 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:41.071 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:41.071 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:41.071 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:41.330 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:41.330 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:41.589 10:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:41.848 10:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
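The state transitions being verified here are driven by two listener-side RPCs per step. A minimal sketch of the set_ANA_state helper implied by the traces above (NQN, address, and ports taken from this log; assumed to go to the target's default RPC socket) is:

  # set_ANA_state <state for 4420> <state for 4421>, e.g. set_ANA_state optimized optimized
  set_ANA_state() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

With bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active applied just before this step, both optimized listeners are expected to report current=true, which is what the check_status true true true true true true assertions that follow confirm.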
00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.227 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:43.486 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.486 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:43.486 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:43.486 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.745 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.745 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:43.745 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.745 10:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:44.005 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.005 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:44.005 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:44.005 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:44.264 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.264 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:44.264 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:44.264 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:44.523 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.523 
10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:44.523 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:44.782 10:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:45.041 10:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:45.976 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:45.977 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:45.977 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.977 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:46.236 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:46.236 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:46.236 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.236 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:46.495 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.495 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:46.495 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.495 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:46.754 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.754 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:46.754 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.754 10:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.322 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:47.597 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.597 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:47.598 10:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:47.870 10:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:48.129 10:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.507 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:49.766 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.766 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:49.766 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.766 10:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:50.025 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.025 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:50.025 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:50.025 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.282 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.282 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:50.282 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.282 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:50.541 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.541 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:50.541 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.541 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:50.800 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.800 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:50.800 10:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:51.059 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:51.318 10:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:52.255 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:52.255 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:52.255 10:34:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:52.255 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:52.513 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:52.513 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:52.513 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:52.513 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:52.772 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:52.772 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:52.772 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:52.772 10:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:53.031 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.031 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:53.031 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.031 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:53.598 10:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91744 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91744 ']' 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91744 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91744 00:20:53.857 killing process with pid 91744 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91744' 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91744 00:20:53.857 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91744 00:20:53.857 { 00:20:53.857 "results": [ 00:20:53.857 { 00:20:53.857 "job": "Nvme0n1", 00:20:53.857 "core_mask": "0x4", 00:20:53.857 "workload": "verify", 00:20:53.857 "status": "terminated", 00:20:53.857 "verify_range": { 00:20:53.857 "start": 0, 00:20:53.857 "length": 16384 00:20:53.857 }, 00:20:53.857 "queue_depth": 128, 00:20:53.857 "io_size": 4096, 00:20:53.857 "runtime": 32.837877, 00:20:53.857 "iops": 9734.125016669013, 00:20:53.857 "mibps": 38.02392584636333, 00:20:53.857 "io_failed": 0, 00:20:53.857 "io_timeout": 0, 00:20:53.857 "avg_latency_us": 13122.055468287845, 00:20:53.857 "min_latency_us": 711.2145454545455, 00:20:53.857 "max_latency_us": 4026531.84 00:20:53.857 } 00:20:53.857 ], 00:20:53.857 "core_count": 1 00:20:53.857 } 00:20:54.120 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91744 00:20:54.120 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:54.120 [2024-12-10 10:33:54.373705] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:20:54.120 [2024-12-10 10:33:54.373800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91744 ] 00:20:54.120 [2024-12-10 10:33:54.510783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.120 [2024-12-10 10:33:54.553126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.120 [2024-12-10 10:33:54.587127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:54.120 [2024-12-10 10:33:56.086486] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:54.120 Running I/O for 90 seconds... 00:20:54.120 7957.00 IOPS, 31.08 MiB/s [2024-12-10T10:34:29.347Z] 8010.00 IOPS, 31.29 MiB/s [2024-12-10T10:34:29.347Z] 8682.67 IOPS, 33.92 MiB/s [2024-12-10T10:34:29.347Z] 9140.00 IOPS, 35.70 MiB/s [2024-12-10T10:34:29.347Z] 9402.40 IOPS, 36.73 MiB/s [2024-12-10T10:34:29.347Z] 9623.50 IOPS, 37.59 MiB/s [2024-12-10T10:34:29.347Z] 9750.43 IOPS, 38.09 MiB/s [2024-12-10T10:34:29.347Z] 9828.62 IOPS, 38.39 MiB/s [2024-12-10T10:34:29.347Z] 9924.78 IOPS, 38.77 MiB/s [2024-12-10T10:34:29.347Z] 9994.70 IOPS, 39.04 MiB/s [2024-12-10T10:34:29.347Z] 10033.00 IOPS, 39.19 MiB/s [2024-12-10T10:34:29.347Z] 10082.25 IOPS, 39.38 MiB/s [2024-12-10T10:34:29.347Z] 10127.62 IOPS, 39.56 MiB/s [2024-12-10T10:34:29.347Z] 10146.50 IOPS, 39.63 MiB/s [2024-12-10T10:34:29.347Z] [2024-12-10 10:34:10.297676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.297743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.297810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.297845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.297865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.297879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.297897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.297910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.297928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.297941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.297959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.297972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.297990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.298003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.120 [2024-12-10 10:34:10.298034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:54.120 [2024-12-10 10:34:10.298300] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.120 [2024-12-10 10:34:10.298313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.298614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 
p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.298970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.298988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.121 [2024-12-10 10:34:10.299434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.299467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.299499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.299549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.299581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.299640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.121 [2024-12-10 10:34:10.299676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:54.121 [2024-12-10 10:34:10.299710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:54.121 [2024-12-10 10:34:10.299730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.299745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.299783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.299803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.299834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.299851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.299871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.299885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.299906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.299921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.299955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.299984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.300391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:54.122 [2024-12-10 10:34:10.300919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.300967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.300987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.301002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.122 [2024-12-10 10:34:10.301037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.122 [2024-12-10 10:34:10.301320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:54.122 [2024-12-10 10:34:10.301358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.301374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.301982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.301995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.302928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:10.302972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:10.303394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:10.303413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:54.123 9520.20 IOPS, 37.19 MiB/s [2024-12-10T10:34:29.350Z] 8925.19 IOPS, 34.86 MiB/s [2024-12-10T10:34:29.350Z] 8400.18 IOPS, 32.81 MiB/s [2024-12-10T10:34:29.350Z] 7933.50 IOPS, 30.99 MiB/s [2024-12-10T10:34:29.350Z] 8030.32 IOPS, 31.37 MiB/s [2024-12-10T10:34:29.350Z] 8150.60 IOPS, 31.84 MiB/s [2024-12-10T10:34:29.350Z] 8339.24 IOPS, 32.58 MiB/s [2024-12-10T10:34:29.350Z] 8607.64 IOPS, 33.62 MiB/s [2024-12-10T10:34:29.350Z] 8840.48 IOPS, 34.53 MiB/s [2024-12-10T10:34:29.350Z] 9008.42 IOPS, 35.19 MiB/s [2024-12-10T10:34:29.350Z] 9077.84 IOPS, 35.46 MiB/s [2024-12-10T10:34:29.350Z] 9130.38 IOPS, 35.67 MiB/s [2024-12-10T10:34:29.350Z] 9173.07 IOPS, 35.83 MiB/s [2024-12-10T10:34:29.350Z] 9344.04 IOPS, 36.50 MiB/s [2024-12-10T10:34:29.350Z] 9505.38 IOPS, 37.13 MiB/s [2024-12-10T10:34:29.350Z] 9642.03 IOPS, 37.66 MiB/s [2024-12-10T10:34:29.350Z] [2024-12-10 10:34:26.403100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:26.403156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:26.403246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:26.403266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:26.403286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.123 [2024-12-10 10:34:26.403300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:54.123 [2024-12-10 10:34:26.403318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.123 [2024-12-10 10:34:26.403331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.403740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.403774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.403807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.403856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.403923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.403959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.403992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.124 [2024-12-10 10:34:26.404317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404401] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.124 [2024-12-10 10:34:26.404561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:54.124 [2024-12-10 10:34:26.404600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.404709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.404741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.404773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 
10:34:26.404792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.404804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.404836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.404983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.404995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.405014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.405027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.405053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.405067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.405085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.405098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 
cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.405119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.405132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.405151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.405164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.125 [2024-12-10 10:34:26.406694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:54.125 [2024-12-10 10:34:26.406809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.125 [2024-12-10 10:34:26.406822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:54.125 9694.97 IOPS, 37.87 MiB/s [2024-12-10T10:34:29.352Z] 9717.50 IOPS, 37.96 MiB/s [2024-12-10T10:34:29.352Z] Received shutdown signal, test time was about 32.838690 
seconds 00:20:54.125 00:20:54.125 Latency(us) 00:20:54.125 [2024-12-10T10:34:29.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.125 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.125 Verification LBA range: start 0x0 length 0x4000 00:20:54.125 Nvme0n1 : 32.84 9734.13 38.02 0.00 0.00 13122.06 711.21 4026531.84 00:20:54.125 [2024-12-10T10:34:29.352Z] =================================================================================================================== 00:20:54.125 [2024-12-10T10:34:29.352Z] Total : 9734.13 38.02 0.00 0.00 13122.06 711.21 4026531.84 00:20:54.125 [2024-12-10 10:34:29.065974] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:20:54.125 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.384 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.384 rmmod nvme_tcp 00:20:54.384 rmmod nvme_fabrics 00:20:54.384 rmmod nvme_keyring 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 91702 ']' 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 91702 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91702 ']' 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91702 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91702 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:54.643 killing process 
with pid 91702 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91702' 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91702 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91702 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:54.643 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:54.644 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.903 10:34:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.903 10:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:54.903 00:20:54.903 real 0m38.199s 00:20:54.903 user 2m3.771s 00:20:54.903 sys 0m10.854s 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:54.903 ************************************ 00:20:54.903 END TEST nvmf_host_multipath_status 00:20:54.903 ************************************ 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.903 ************************************ 00:20:54.903 START TEST nvmf_discovery_remove_ifc 00:20:54.903 ************************************ 00:20:54.903 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:55.163 * Looking for test storage... 00:20:55.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # 
case "$op" in 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.163 --rc genhtml_branch_coverage=1 00:20:55.163 --rc genhtml_function_coverage=1 00:20:55.163 --rc genhtml_legend=1 00:20:55.163 --rc geninfo_all_blocks=1 00:20:55.163 --rc geninfo_unexecuted_blocks=1 00:20:55.163 00:20:55.163 ' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.163 --rc genhtml_branch_coverage=1 00:20:55.163 --rc genhtml_function_coverage=1 00:20:55.163 --rc genhtml_legend=1 00:20:55.163 --rc geninfo_all_blocks=1 00:20:55.163 --rc geninfo_unexecuted_blocks=1 00:20:55.163 00:20:55.163 ' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.163 --rc genhtml_branch_coverage=1 00:20:55.163 --rc genhtml_function_coverage=1 00:20:55.163 --rc genhtml_legend=1 00:20:55.163 --rc geninfo_all_blocks=1 00:20:55.163 --rc geninfo_unexecuted_blocks=1 00:20:55.163 00:20:55.163 ' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.163 --rc genhtml_branch_coverage=1 00:20:55.163 --rc genhtml_function_coverage=1 
00:20:55.163 --rc genhtml_legend=1 00:20:55.163 --rc geninfo_all_blocks=1 00:20:55.163 --rc geninfo_unexecuted_blocks=1 00:20:55.163 00:20:55.163 ' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:20:55.163 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.163 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:55.164 10:34:30 
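Note: the "[: : integer expression expected" complaint above is bash rejecting the traced test '[' '' -eq 1 ']' at nvmf/common.sh line 33 — an empty or unset variable handed to the test builtin as an integer. It is a warning, not a failure, and the setup continues. A defaulted expansion avoids it; the flag name is not visible in the xtrace, so SOME_FLAG below is only a placeholder:

  # Hypothetical guard; the real variable at common.sh line 33 is not shown in the trace.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi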
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:55.164 Cannot find device "nvmf_init_br" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:55.164 Cannot find device "nvmf_init_br2" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:55.164 Cannot find device "nvmf_tgt_br" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.164 Cannot find device "nvmf_tgt_br2" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:55.164 Cannot find device "nvmf_init_br" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:55.164 Cannot find device "nvmf_init_br2" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:55.164 Cannot find device "nvmf_tgt_br" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set 
nvmf_tgt_br2 down 00:20:55.164 Cannot find device "nvmf_tgt_br2" 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:55.164 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:55.423 Cannot find device "nvmf_br" 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:55.423 Cannot find device "nvmf_init_if" 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:55.423 Cannot find device "nvmf_init_if2" 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:55.423 10:34:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:55.423 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:55.424 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.424 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:55.424 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:55.683 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.683 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:20:55.683 00:20:55.683 --- 10.0.0.3 ping statistics --- 00:20:55.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.683 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:55.683 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:55.683 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:20:55.683 00:20:55.683 --- 10.0.0.4 ping statistics --- 00:20:55.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.683 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:55.683 00:20:55.683 --- 10.0.0.1 ping statistics --- 00:20:55.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.683 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:55.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:20:55.683 00:20:55.683 --- 10.0.0.2 ping statistics --- 00:20:55.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.683 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=92575 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 92575 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 92575 ']' 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 
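The nvmf_veth_init block above builds the virtual topology the rest of the test runs on: a network namespace for the target, veth pairs for the initiator and target ends, a bridge joining them, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of the same sequence, taken from the trace (only the first initiator/target pair is shown; the second pair for 10.0.0.2/10.0.0.4 is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + bridge peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + bridge peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                            # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

The target application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 92575 in this run), so it listens on 10.0.0.3 while the host-side app stays in the root namespace.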
00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.683 10:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:55.683 [2024-12-10 10:34:30.792897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:55.683 [2024-12-10 10:34:30.792987] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.942 [2024-12-10 10:34:30.932358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.942 [2024-12-10 10:34:30.962361] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.942 [2024-12-10 10:34:30.962441] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.942 [2024-12-10 10:34:30.962467] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.942 [2024-12-10 10:34:30.962474] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.942 [2024-12-10 10:34:30.962480] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.942 [2024-12-10 10:34:30.962503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.942 [2024-12-10 10:34:30.988330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:55.942 [2024-12-10 10:34:31.102953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.942 [2024-12-10 10:34:31.111064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:55.942 null0 00:20:55.942 [2024-12-10 10:34:31.142991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.942 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92595 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92595 /tmp/host.sock 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 92595 ']' 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.943 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.943 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:56.202 [2024-12-10 10:34:31.228453] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:56.202 [2024-12-10 10:34:31.228546] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92595 ] 00:20:56.202 [2024-12-10 10:34:31.370666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.202 [2024-12-10 10:34:31.411442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:56.461 [2024-12-10 10:34:31.511056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.461 10:34:31 
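Two SPDK apps are in play: the target just started inside the namespace, and a host-side app in the root namespace that acts as the NVMe bdev/discovery client. The host app is started with --wait-for-rpc so its bdev_nvme options can be set before framework init; rpc_cmd in the trace is the harness wrapper that forwards to scripts/rpc.py. Spelled out directly, the host-side startup is roughly (the readiness poll below is only a stand-in for the harness's waitforlisten):

  # Host-side app: core mask 0x1, private RPC socket, bdev_nvme debug log (flags as in the trace).
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!
  # Crude readiness poll standing in for waitforlisten.
  until scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # same arguments as the trace
  scripts/rpc.py -s /tmp/host.sock framework_start_init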
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.461 10:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:57.398 [2024-12-10 10:34:32.545788] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:57.398 [2024-12-10 10:34:32.545831] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:57.398 [2024-12-10 10:34:32.545848] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:57.398 [2024-12-10 10:34:32.551856] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:57.398 [2024-12-10 10:34:32.608345] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:57.398 [2024-12-10 10:34:32.608426] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:57.399 [2024-12-10 10:34:32.608453] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:57.399 [2024-12-10 10:34:32.608467] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:57.399 [2024-12-10 10:34:32.608487] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.399 [2024-12-10 10:34:32.614488] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1262290 was disconnected and freed. delete nvme_qpair. 
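With both apps up, the host starts a persistent discovery connection to the target's discovery service on 10.0.0.3:8009 with short reconnect/loss timeouts, so the interface-removal step later resolves within seconds, then waits for the attached namespace to appear as bdev nvme0n1. A sketch of this step with the arguments copied from the trace (get_bdev_list in the harness is the bdev_get_bdevs | jq | sort | xargs pipeline seen above):

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # wait_for_bdev nvme0n1: poll the bdev list once a second until it matches.
  until [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" == nvme0n1 ]]; do
      sleep 1
  done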
00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:57.399 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:57.658 10:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:58.595 10:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:59.972 10:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:00.908 10:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:01.844 10:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:02.780 10:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.039 10:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:03.039 10:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:03.039 [2024-12-10 10:34:38.037157] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:03.039 [2024-12-10 10:34:38.037226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.039 [2024-12-10 10:34:38.037240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.039 [2024-12-10 10:34:38.037251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.039 [2024-12-10 10:34:38.037258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.039 [2024-12-10 10:34:38.037266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.039 [2024-12-10 10:34:38.037274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.039 [2024-12-10 10:34:38.037283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.039 [2024-12-10 10:34:38.037290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.039 [2024-12-10 10:34:38.037298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.039 [2024-12-10 10:34:38.037306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.039 [2024-12-10 10:34:38.037313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123dd00 is same with the state(6) to be set 00:21:03.039 [2024-12-10 10:34:38.047151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123dd00 (9): Bad file descriptor 00:21:03.039 [2024-12-10 10:34:38.057168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:03.975 [2024-12-10 10:34:39.062467] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:03.975 [2024-12-10 10:34:39.062529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123dd00 with addr=10.0.0.3, port=4420 00:21:03.975 [2024-12-10 10:34:39.062546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123dd00 is same with the state(6) to be set 00:21:03.975 [2024-12-10 10:34:39.062575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123dd00 (9): Bad file descriptor 00:21:03.975 [2024-12-10 10:34:39.062935] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:03.975 [2024-12-10 10:34:39.062975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:03.975 [2024-12-10 10:34:39.062986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:03.975 [2024-12-10 10:34:39.062996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:03.975 [2024-12-10 10:34:39.063015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.975 [2024-12-10 10:34:39.063025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:03.975 10:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:04.912 [2024-12-10 10:34:40.063053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:04.912 [2024-12-10 10:34:40.063107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:04.912 [2024-12-10 10:34:40.063117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:04.912 [2024-12-10 10:34:40.063125] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:04.912 [2024-12-10 10:34:40.063144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
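The errno 110 "Connection timed out" and repeated "Resetting controller failed" messages above are the intended outcome of steps @75/@76, which removed 10.0.0.3 from nvmf_tgt_if and downed the link inside the namespace. With ctrlr-loss-timeout-sec=2 and reconnect-delay-sec=1 the reconnect attempts give up quickly, bdev_nvme deletes nvme0n1, and the test's poll (the same wait_for_bdev pattern, now expecting an empty list) lets it proceed. A condensed restatement of this phase:

  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # Wait for bdev_nvme to give up on the controller and drop nvme0n1.
  until [[ -z "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" ]]; do
      sleep 1
  done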
00:21:04.912 [2024-12-10 10:34:40.063170] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:04.912 [2024-12-10 10:34:40.063205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.912 [2024-12-10 10:34:40.063219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.912 [2024-12-10 10:34:40.063230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.912 [2024-12-10 10:34:40.063238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.912 [2024-12-10 10:34:40.063246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.912 [2024-12-10 10:34:40.063253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.912 [2024-12-10 10:34:40.063261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.912 [2024-12-10 10:34:40.063269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.912 [2024-12-10 10:34:40.063278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.912 [2024-12-10 10:34:40.063285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.912 [2024-12-10 10:34:40.063293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:21:04.912 [2024-12-10 10:34:40.063515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122c2a0 (9): Bad file descriptor 00:21:04.912 [2024-12-10 10:34:40.064527] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:04.912 [2024-12-10 10:34:40.064549] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:04.912 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:05.171 10:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.108 10:34:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:06.108 10:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:07.045 [2024-12-10 10:34:42.069648] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:07.045 [2024-12-10 10:34:42.069674] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:07.045 [2024-12-10 10:34:42.069705] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:07.045 [2024-12-10 10:34:42.075711] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:07.045 [2024-12-10 10:34:42.131508] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:07.045 [2024-12-10 10:34:42.131564] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:07.045 [2024-12-10 10:34:42.131584] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:07.045 [2024-12-10 10:34:42.131598] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:07.045 [2024-12-10 10:34:42.131633] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:07.045 [2024-12-10 10:34:42.138247] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1219920 was disconnected and freed. delete nvme_qpair. 
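Second half of the test: the address and link are restored (@82/@83), and the still-running discovery service is expected to re-attach on its own, creating a fresh controller and bdev (nvme1n1 rather than nvme0n1, since the old controller was deleted). Condensed sketch:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # wait_for_bdev nvme1n1: the discovery poller reconnects without any new RPC.
  until [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" == nvme1n1 ]]; do
      sleep 1
  done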
00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92595 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 92595 ']' 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 92595 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92595 00:21:07.305 killing process with pid 92595 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92595' 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 92595 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 92595 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:07.305 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.564 rmmod nvme_tcp 00:21:07.564 rmmod nvme_fabrics 00:21:07.564 rmmod nvme_keyring 00:21:07.564 10:34:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 92575 ']' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 92575 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 92575 ']' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 92575 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92575 00:21:07.564 killing process with pid 92575 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92575' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 92575 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 92575 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:21:07.564 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.824 10:34:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.824 10:34:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:07.824 00:21:07.824 real 0m12.929s 00:21:07.824 user 0m22.124s 00:21:07.824 sys 0m2.282s 00:21:07.824 10:34:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.824 10:34:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:07.824 ************************************ 00:21:07.824 END TEST nvmf_discovery_remove_ifc 00:21:07.824 ************************************ 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.084 ************************************ 00:21:08.084 START TEST nvmf_identify_kernel_target 00:21:08.084 ************************************ 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:08.084 * Looking for test storage... 
00:21:08.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:08.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.084 --rc genhtml_branch_coverage=1 00:21:08.084 --rc genhtml_function_coverage=1 00:21:08.084 --rc genhtml_legend=1 00:21:08.084 --rc geninfo_all_blocks=1 00:21:08.084 --rc geninfo_unexecuted_blocks=1 00:21:08.084 00:21:08.084 ' 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:08.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.084 --rc genhtml_branch_coverage=1 00:21:08.084 --rc genhtml_function_coverage=1 00:21:08.084 --rc genhtml_legend=1 00:21:08.084 --rc geninfo_all_blocks=1 00:21:08.084 --rc geninfo_unexecuted_blocks=1 00:21:08.084 00:21:08.084 ' 00:21:08.084 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:08.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.084 --rc genhtml_branch_coverage=1 00:21:08.084 --rc genhtml_function_coverage=1 00:21:08.085 --rc genhtml_legend=1 00:21:08.085 --rc geninfo_all_blocks=1 00:21:08.085 --rc geninfo_unexecuted_blocks=1 00:21:08.085 00:21:08.085 ' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:08.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.085 --rc genhtml_branch_coverage=1 00:21:08.085 --rc genhtml_function_coverage=1 00:21:08.085 --rc genhtml_legend=1 00:21:08.085 --rc geninfo_all_blocks=1 00:21:08.085 --rc geninfo_unexecuted_blocks=1 00:21:08.085 00:21:08.085 ' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
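The cmp_versions trace above is how the harness decides whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' into arrays and compared field by field. A condensed sketch of that comparison follows, under the assumptions that the fields are plain integers and that the simplified name version_lt stands in for the lt/cmp_versions pair in scripts/common.sh (the real script also routes each field through its decimal helper).

    # Hypothetical condensed version of the comparison seen in the trace.
    version_lt() {                       # returns 0 (true) if $1 < $2
        local IFS=.-:                    # split on '.', '-' and ':' as the trace does
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                         # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2"   # matches 'lt 1.15 2' above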
00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:08.085 10:34:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.085 10:34:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:08.085 Cannot find device "nvmf_init_br" 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:08.085 Cannot find device "nvmf_init_br2" 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:08.085 Cannot find device "nvmf_tgt_br" 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.085 Cannot find device "nvmf_tgt_br2" 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:08.085 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:08.085 Cannot find device "nvmf_init_br" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:08.345 Cannot find device "nvmf_init_br2" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:08.345 Cannot find device "nvmf_tgt_br" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:08.345 Cannot find device "nvmf_tgt_br2" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:08.345 Cannot find device "nvmf_br" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:08.345 Cannot find device "nvmf_init_if" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:08.345 Cannot find device "nvmf_init_if2" 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.345 10:34:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:08.345 10:34:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:08.345 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:08.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:21:08.605 00:21:08.605 --- 10.0.0.3 ping statistics --- 00:21:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.605 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:08.605 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:08.605 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:21:08.605 00:21:08.605 --- 10.0.0.4 ping statistics --- 00:21:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.605 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:08.605 00:21:08.605 --- 10.0.0.1 ping statistics --- 00:21:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.605 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:08.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:08.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:21:08.605 00:21:08.605 --- 10.0.0.2 ping statistics --- 00:21:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.605 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:08.605 10:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:08.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:08.864 Waiting for block devices as requested 00:21:08.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:09.124 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:09.124 No valid GPT data, bailing 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:09.124 10:34:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:09.124 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:09.383 No valid GPT data, bailing 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:09.383 No valid GPT data, bailing 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:09.383 No valid GPT data, bailing 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:09.383 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -a 10.0.0.1 -t tcp -s 4420 00:21:09.383 00:21:09.383 Discovery Log Number of Records 2, Generation counter 2 00:21:09.383 =====Discovery Log Entry 0====== 00:21:09.383 trtype: tcp 00:21:09.383 adrfam: ipv4 00:21:09.383 subtype: current discovery subsystem 00:21:09.383 treq: not specified, sq flow control disable supported 00:21:09.383 portid: 1 00:21:09.384 trsvcid: 4420 00:21:09.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:09.384 traddr: 10.0.0.1 00:21:09.384 eflags: none 00:21:09.384 sectype: none 00:21:09.384 =====Discovery Log Entry 1====== 00:21:09.384 trtype: tcp 00:21:09.384 adrfam: ipv4 00:21:09.384 subtype: nvme subsystem 00:21:09.384 treq: not 
specified, sq flow control disable supported 00:21:09.384 portid: 1 00:21:09.384 trsvcid: 4420 00:21:09.384 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:09.384 traddr: 10.0.0.1 00:21:09.384 eflags: none 00:21:09.384 sectype: none 00:21:09.384 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:09.384 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:09.647 ===================================================== 00:21:09.647 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:09.647 ===================================================== 00:21:09.647 Controller Capabilities/Features 00:21:09.647 ================================ 00:21:09.647 Vendor ID: 0000 00:21:09.647 Subsystem Vendor ID: 0000 00:21:09.647 Serial Number: 7d8cb912c6afac38213a 00:21:09.647 Model Number: Linux 00:21:09.647 Firmware Version: 6.8.9-20 00:21:09.647 Recommended Arb Burst: 0 00:21:09.647 IEEE OUI Identifier: 00 00 00 00:21:09.647 Multi-path I/O 00:21:09.647 May have multiple subsystem ports: No 00:21:09.647 May have multiple controllers: No 00:21:09.647 Associated with SR-IOV VF: No 00:21:09.647 Max Data Transfer Size: Unlimited 00:21:09.647 Max Number of Namespaces: 0 00:21:09.647 Max Number of I/O Queues: 1024 00:21:09.647 NVMe Specification Version (VS): 1.3 00:21:09.647 NVMe Specification Version (Identify): 1.3 00:21:09.647 Maximum Queue Entries: 1024 00:21:09.647 Contiguous Queues Required: No 00:21:09.647 Arbitration Mechanisms Supported 00:21:09.647 Weighted Round Robin: Not Supported 00:21:09.647 Vendor Specific: Not Supported 00:21:09.647 Reset Timeout: 7500 ms 00:21:09.647 Doorbell Stride: 4 bytes 00:21:09.647 NVM Subsystem Reset: Not Supported 00:21:09.647 Command Sets Supported 00:21:09.647 NVM Command Set: Supported 00:21:09.647 Boot Partition: Not Supported 00:21:09.647 Memory Page Size Minimum: 4096 bytes 00:21:09.647 Memory Page Size Maximum: 4096 bytes 00:21:09.647 Persistent Memory Region: Not Supported 00:21:09.647 Optional Asynchronous Events Supported 00:21:09.647 Namespace Attribute Notices: Not Supported 00:21:09.647 Firmware Activation Notices: Not Supported 00:21:09.647 ANA Change Notices: Not Supported 00:21:09.647 PLE Aggregate Log Change Notices: Not Supported 00:21:09.647 LBA Status Info Alert Notices: Not Supported 00:21:09.647 EGE Aggregate Log Change Notices: Not Supported 00:21:09.647 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.647 Zone Descriptor Change Notices: Not Supported 00:21:09.647 Discovery Log Change Notices: Supported 00:21:09.647 Controller Attributes 00:21:09.647 128-bit Host Identifier: Not Supported 00:21:09.647 Non-Operational Permissive Mode: Not Supported 00:21:09.647 NVM Sets: Not Supported 00:21:09.647 Read Recovery Levels: Not Supported 00:21:09.647 Endurance Groups: Not Supported 00:21:09.647 Predictable Latency Mode: Not Supported 00:21:09.647 Traffic Based Keep ALive: Not Supported 00:21:09.647 Namespace Granularity: Not Supported 00:21:09.647 SQ Associations: Not Supported 00:21:09.647 UUID List: Not Supported 00:21:09.647 Multi-Domain Subsystem: Not Supported 00:21:09.647 Fixed Capacity Management: Not Supported 00:21:09.647 Variable Capacity Management: Not Supported 00:21:09.647 Delete Endurance Group: Not Supported 00:21:09.647 Delete NVM Set: Not Supported 00:21:09.647 Extended LBA Formats Supported: Not Supported 00:21:09.647 Flexible Data 
Placement Supported: Not Supported 00:21:09.647 00:21:09.647 Controller Memory Buffer Support 00:21:09.647 ================================ 00:21:09.647 Supported: No 00:21:09.647 00:21:09.647 Persistent Memory Region Support 00:21:09.647 ================================ 00:21:09.647 Supported: No 00:21:09.647 00:21:09.647 Admin Command Set Attributes 00:21:09.647 ============================ 00:21:09.647 Security Send/Receive: Not Supported 00:21:09.647 Format NVM: Not Supported 00:21:09.647 Firmware Activate/Download: Not Supported 00:21:09.647 Namespace Management: Not Supported 00:21:09.647 Device Self-Test: Not Supported 00:21:09.647 Directives: Not Supported 00:21:09.647 NVMe-MI: Not Supported 00:21:09.647 Virtualization Management: Not Supported 00:21:09.647 Doorbell Buffer Config: Not Supported 00:21:09.647 Get LBA Status Capability: Not Supported 00:21:09.647 Command & Feature Lockdown Capability: Not Supported 00:21:09.647 Abort Command Limit: 1 00:21:09.647 Async Event Request Limit: 1 00:21:09.647 Number of Firmware Slots: N/A 00:21:09.647 Firmware Slot 1 Read-Only: N/A 00:21:09.647 Firmware Activation Without Reset: N/A 00:21:09.647 Multiple Update Detection Support: N/A 00:21:09.647 Firmware Update Granularity: No Information Provided 00:21:09.647 Per-Namespace SMART Log: No 00:21:09.647 Asymmetric Namespace Access Log Page: Not Supported 00:21:09.647 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:09.647 Command Effects Log Page: Not Supported 00:21:09.647 Get Log Page Extended Data: Supported 00:21:09.647 Telemetry Log Pages: Not Supported 00:21:09.647 Persistent Event Log Pages: Not Supported 00:21:09.647 Supported Log Pages Log Page: May Support 00:21:09.647 Commands Supported & Effects Log Page: Not Supported 00:21:09.647 Feature Identifiers & Effects Log Page:May Support 00:21:09.647 NVMe-MI Commands & Effects Log Page: May Support 00:21:09.647 Data Area 4 for Telemetry Log: Not Supported 00:21:09.647 Error Log Page Entries Supported: 1 00:21:09.647 Keep Alive: Not Supported 00:21:09.647 00:21:09.647 NVM Command Set Attributes 00:21:09.647 ========================== 00:21:09.647 Submission Queue Entry Size 00:21:09.647 Max: 1 00:21:09.648 Min: 1 00:21:09.648 Completion Queue Entry Size 00:21:09.648 Max: 1 00:21:09.648 Min: 1 00:21:09.648 Number of Namespaces: 0 00:21:09.648 Compare Command: Not Supported 00:21:09.648 Write Uncorrectable Command: Not Supported 00:21:09.648 Dataset Management Command: Not Supported 00:21:09.648 Write Zeroes Command: Not Supported 00:21:09.648 Set Features Save Field: Not Supported 00:21:09.648 Reservations: Not Supported 00:21:09.648 Timestamp: Not Supported 00:21:09.648 Copy: Not Supported 00:21:09.648 Volatile Write Cache: Not Present 00:21:09.648 Atomic Write Unit (Normal): 1 00:21:09.648 Atomic Write Unit (PFail): 1 00:21:09.648 Atomic Compare & Write Unit: 1 00:21:09.648 Fused Compare & Write: Not Supported 00:21:09.648 Scatter-Gather List 00:21:09.648 SGL Command Set: Supported 00:21:09.648 SGL Keyed: Not Supported 00:21:09.648 SGL Bit Bucket Descriptor: Not Supported 00:21:09.648 SGL Metadata Pointer: Not Supported 00:21:09.648 Oversized SGL: Not Supported 00:21:09.648 SGL Metadata Address: Not Supported 00:21:09.648 SGL Offset: Supported 00:21:09.648 Transport SGL Data Block: Not Supported 00:21:09.648 Replay Protected Memory Block: Not Supported 00:21:09.648 00:21:09.648 Firmware Slot Information 00:21:09.648 ========================= 00:21:09.648 Active slot: 0 00:21:09.648 00:21:09.648 00:21:09.648 Error Log 
00:21:09.648 ========= 00:21:09.648 00:21:09.648 Active Namespaces 00:21:09.648 ================= 00:21:09.648 Discovery Log Page 00:21:09.648 ================== 00:21:09.648 Generation Counter: 2 00:21:09.648 Number of Records: 2 00:21:09.648 Record Format: 0 00:21:09.648 00:21:09.648 Discovery Log Entry 0 00:21:09.648 ---------------------- 00:21:09.648 Transport Type: 3 (TCP) 00:21:09.648 Address Family: 1 (IPv4) 00:21:09.648 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:09.648 Entry Flags: 00:21:09.648 Duplicate Returned Information: 0 00:21:09.648 Explicit Persistent Connection Support for Discovery: 0 00:21:09.648 Transport Requirements: 00:21:09.648 Secure Channel: Not Specified 00:21:09.648 Port ID: 1 (0x0001) 00:21:09.648 Controller ID: 65535 (0xffff) 00:21:09.648 Admin Max SQ Size: 32 00:21:09.648 Transport Service Identifier: 4420 00:21:09.648 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:09.648 Transport Address: 10.0.0.1 00:21:09.648 Discovery Log Entry 1 00:21:09.648 ---------------------- 00:21:09.648 Transport Type: 3 (TCP) 00:21:09.648 Address Family: 1 (IPv4) 00:21:09.648 Subsystem Type: 2 (NVM Subsystem) 00:21:09.648 Entry Flags: 00:21:09.648 Duplicate Returned Information: 0 00:21:09.648 Explicit Persistent Connection Support for Discovery: 0 00:21:09.648 Transport Requirements: 00:21:09.648 Secure Channel: Not Specified 00:21:09.648 Port ID: 1 (0x0001) 00:21:09.648 Controller ID: 65535 (0xffff) 00:21:09.648 Admin Max SQ Size: 32 00:21:09.648 Transport Service Identifier: 4420 00:21:09.648 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:09.648 Transport Address: 10.0.0.1 00:21:09.648 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:09.946 get_feature(0x01) failed 00:21:09.946 get_feature(0x02) failed 00:21:09.946 get_feature(0x04) failed 00:21:09.946 ===================================================== 00:21:09.946 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:09.946 ===================================================== 00:21:09.946 Controller Capabilities/Features 00:21:09.946 ================================ 00:21:09.946 Vendor ID: 0000 00:21:09.946 Subsystem Vendor ID: 0000 00:21:09.946 Serial Number: fa2f631e54ff6e009f12 00:21:09.946 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:09.946 Firmware Version: 6.8.9-20 00:21:09.946 Recommended Arb Burst: 6 00:21:09.946 IEEE OUI Identifier: 00 00 00 00:21:09.946 Multi-path I/O 00:21:09.946 May have multiple subsystem ports: Yes 00:21:09.946 May have multiple controllers: Yes 00:21:09.946 Associated with SR-IOV VF: No 00:21:09.946 Max Data Transfer Size: Unlimited 00:21:09.946 Max Number of Namespaces: 1024 00:21:09.946 Max Number of I/O Queues: 128 00:21:09.946 NVMe Specification Version (VS): 1.3 00:21:09.946 NVMe Specification Version (Identify): 1.3 00:21:09.946 Maximum Queue Entries: 1024 00:21:09.946 Contiguous Queues Required: No 00:21:09.946 Arbitration Mechanisms Supported 00:21:09.946 Weighted Round Robin: Not Supported 00:21:09.946 Vendor Specific: Not Supported 00:21:09.946 Reset Timeout: 7500 ms 00:21:09.946 Doorbell Stride: 4 bytes 00:21:09.946 NVM Subsystem Reset: Not Supported 00:21:09.946 Command Sets Supported 00:21:09.946 NVM Command Set: Supported 00:21:09.946 Boot Partition: Not Supported 00:21:09.946 Memory 
Page Size Minimum: 4096 bytes 00:21:09.946 Memory Page Size Maximum: 4096 bytes 00:21:09.946 Persistent Memory Region: Not Supported 00:21:09.946 Optional Asynchronous Events Supported 00:21:09.946 Namespace Attribute Notices: Supported 00:21:09.946 Firmware Activation Notices: Not Supported 00:21:09.946 ANA Change Notices: Supported 00:21:09.946 PLE Aggregate Log Change Notices: Not Supported 00:21:09.946 LBA Status Info Alert Notices: Not Supported 00:21:09.946 EGE Aggregate Log Change Notices: Not Supported 00:21:09.946 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.946 Zone Descriptor Change Notices: Not Supported 00:21:09.946 Discovery Log Change Notices: Not Supported 00:21:09.946 Controller Attributes 00:21:09.946 128-bit Host Identifier: Supported 00:21:09.946 Non-Operational Permissive Mode: Not Supported 00:21:09.946 NVM Sets: Not Supported 00:21:09.946 Read Recovery Levels: Not Supported 00:21:09.946 Endurance Groups: Not Supported 00:21:09.946 Predictable Latency Mode: Not Supported 00:21:09.946 Traffic Based Keep ALive: Supported 00:21:09.946 Namespace Granularity: Not Supported 00:21:09.946 SQ Associations: Not Supported 00:21:09.946 UUID List: Not Supported 00:21:09.946 Multi-Domain Subsystem: Not Supported 00:21:09.946 Fixed Capacity Management: Not Supported 00:21:09.946 Variable Capacity Management: Not Supported 00:21:09.946 Delete Endurance Group: Not Supported 00:21:09.946 Delete NVM Set: Not Supported 00:21:09.946 Extended LBA Formats Supported: Not Supported 00:21:09.946 Flexible Data Placement Supported: Not Supported 00:21:09.946 00:21:09.946 Controller Memory Buffer Support 00:21:09.946 ================================ 00:21:09.946 Supported: No 00:21:09.946 00:21:09.946 Persistent Memory Region Support 00:21:09.946 ================================ 00:21:09.946 Supported: No 00:21:09.946 00:21:09.946 Admin Command Set Attributes 00:21:09.946 ============================ 00:21:09.946 Security Send/Receive: Not Supported 00:21:09.946 Format NVM: Not Supported 00:21:09.946 Firmware Activate/Download: Not Supported 00:21:09.946 Namespace Management: Not Supported 00:21:09.946 Device Self-Test: Not Supported 00:21:09.946 Directives: Not Supported 00:21:09.946 NVMe-MI: Not Supported 00:21:09.946 Virtualization Management: Not Supported 00:21:09.946 Doorbell Buffer Config: Not Supported 00:21:09.946 Get LBA Status Capability: Not Supported 00:21:09.946 Command & Feature Lockdown Capability: Not Supported 00:21:09.946 Abort Command Limit: 4 00:21:09.946 Async Event Request Limit: 4 00:21:09.946 Number of Firmware Slots: N/A 00:21:09.946 Firmware Slot 1 Read-Only: N/A 00:21:09.946 Firmware Activation Without Reset: N/A 00:21:09.946 Multiple Update Detection Support: N/A 00:21:09.946 Firmware Update Granularity: No Information Provided 00:21:09.946 Per-Namespace SMART Log: Yes 00:21:09.946 Asymmetric Namespace Access Log Page: Supported 00:21:09.946 ANA Transition Time : 10 sec 00:21:09.946 00:21:09.946 Asymmetric Namespace Access Capabilities 00:21:09.946 ANA Optimized State : Supported 00:21:09.946 ANA Non-Optimized State : Supported 00:21:09.946 ANA Inaccessible State : Supported 00:21:09.946 ANA Persistent Loss State : Supported 00:21:09.946 ANA Change State : Supported 00:21:09.946 ANAGRPID is not changed : No 00:21:09.946 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:09.946 00:21:09.946 ANA Group Identifier Maximum : 128 00:21:09.946 Number of ANA Group Identifiers : 128 00:21:09.946 Max Number of Allowed Namespaces : 1024 00:21:09.946 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:09.946 Command Effects Log Page: Supported 00:21:09.946 Get Log Page Extended Data: Supported 00:21:09.946 Telemetry Log Pages: Not Supported 00:21:09.946 Persistent Event Log Pages: Not Supported 00:21:09.946 Supported Log Pages Log Page: May Support 00:21:09.946 Commands Supported & Effects Log Page: Not Supported 00:21:09.946 Feature Identifiers & Effects Log Page:May Support 00:21:09.946 NVMe-MI Commands & Effects Log Page: May Support 00:21:09.946 Data Area 4 for Telemetry Log: Not Supported 00:21:09.946 Error Log Page Entries Supported: 128 00:21:09.946 Keep Alive: Supported 00:21:09.946 Keep Alive Granularity: 1000 ms 00:21:09.946 00:21:09.946 NVM Command Set Attributes 00:21:09.946 ========================== 00:21:09.946 Submission Queue Entry Size 00:21:09.946 Max: 64 00:21:09.946 Min: 64 00:21:09.946 Completion Queue Entry Size 00:21:09.946 Max: 16 00:21:09.946 Min: 16 00:21:09.946 Number of Namespaces: 1024 00:21:09.946 Compare Command: Not Supported 00:21:09.946 Write Uncorrectable Command: Not Supported 00:21:09.946 Dataset Management Command: Supported 00:21:09.946 Write Zeroes Command: Supported 00:21:09.946 Set Features Save Field: Not Supported 00:21:09.946 Reservations: Not Supported 00:21:09.946 Timestamp: Not Supported 00:21:09.946 Copy: Not Supported 00:21:09.946 Volatile Write Cache: Present 00:21:09.946 Atomic Write Unit (Normal): 1 00:21:09.946 Atomic Write Unit (PFail): 1 00:21:09.946 Atomic Compare & Write Unit: 1 00:21:09.946 Fused Compare & Write: Not Supported 00:21:09.946 Scatter-Gather List 00:21:09.946 SGL Command Set: Supported 00:21:09.946 SGL Keyed: Not Supported 00:21:09.946 SGL Bit Bucket Descriptor: Not Supported 00:21:09.946 SGL Metadata Pointer: Not Supported 00:21:09.946 Oversized SGL: Not Supported 00:21:09.946 SGL Metadata Address: Not Supported 00:21:09.946 SGL Offset: Supported 00:21:09.946 Transport SGL Data Block: Not Supported 00:21:09.946 Replay Protected Memory Block: Not Supported 00:21:09.946 00:21:09.946 Firmware Slot Information 00:21:09.946 ========================= 00:21:09.946 Active slot: 0 00:21:09.946 00:21:09.946 Asymmetric Namespace Access 00:21:09.946 =========================== 00:21:09.946 Change Count : 0 00:21:09.946 Number of ANA Group Descriptors : 1 00:21:09.946 ANA Group Descriptor : 0 00:21:09.946 ANA Group ID : 1 00:21:09.946 Number of NSID Values : 1 00:21:09.946 Change Count : 0 00:21:09.946 ANA State : 1 00:21:09.946 Namespace Identifier : 1 00:21:09.946 00:21:09.946 Commands Supported and Effects 00:21:09.946 ============================== 00:21:09.946 Admin Commands 00:21:09.946 -------------- 00:21:09.946 Get Log Page (02h): Supported 00:21:09.946 Identify (06h): Supported 00:21:09.946 Abort (08h): Supported 00:21:09.946 Set Features (09h): Supported 00:21:09.947 Get Features (0Ah): Supported 00:21:09.947 Asynchronous Event Request (0Ch): Supported 00:21:09.947 Keep Alive (18h): Supported 00:21:09.947 I/O Commands 00:21:09.947 ------------ 00:21:09.947 Flush (00h): Supported 00:21:09.947 Write (01h): Supported LBA-Change 00:21:09.947 Read (02h): Supported 00:21:09.947 Write Zeroes (08h): Supported LBA-Change 00:21:09.947 Dataset Management (09h): Supported 00:21:09.947 00:21:09.947 Error Log 00:21:09.947 ========= 00:21:09.947 Entry: 0 00:21:09.947 Error Count: 0x3 00:21:09.947 Submission Queue Id: 0x0 00:21:09.947 Command Id: 0x5 00:21:09.947 Phase Bit: 0 00:21:09.947 Status Code: 0x2 00:21:09.947 Status Code Type: 0x0 00:21:09.947 Do Not Retry: 1 00:21:09.947 Error 
Location: 0x28 00:21:09.947 LBA: 0x0 00:21:09.947 Namespace: 0x0 00:21:09.947 Vendor Log Page: 0x0 00:21:09.947 ----------- 00:21:09.947 Entry: 1 00:21:09.947 Error Count: 0x2 00:21:09.947 Submission Queue Id: 0x0 00:21:09.947 Command Id: 0x5 00:21:09.947 Phase Bit: 0 00:21:09.947 Status Code: 0x2 00:21:09.947 Status Code Type: 0x0 00:21:09.947 Do Not Retry: 1 00:21:09.947 Error Location: 0x28 00:21:09.947 LBA: 0x0 00:21:09.947 Namespace: 0x0 00:21:09.947 Vendor Log Page: 0x0 00:21:09.947 ----------- 00:21:09.947 Entry: 2 00:21:09.947 Error Count: 0x1 00:21:09.947 Submission Queue Id: 0x0 00:21:09.947 Command Id: 0x4 00:21:09.947 Phase Bit: 0 00:21:09.947 Status Code: 0x2 00:21:09.947 Status Code Type: 0x0 00:21:09.947 Do Not Retry: 1 00:21:09.947 Error Location: 0x28 00:21:09.947 LBA: 0x0 00:21:09.947 Namespace: 0x0 00:21:09.947 Vendor Log Page: 0x0 00:21:09.947 00:21:09.947 Number of Queues 00:21:09.947 ================ 00:21:09.947 Number of I/O Submission Queues: 128 00:21:09.947 Number of I/O Completion Queues: 128 00:21:09.947 00:21:09.947 ZNS Specific Controller Data 00:21:09.947 ============================ 00:21:09.947 Zone Append Size Limit: 0 00:21:09.947 00:21:09.947 00:21:09.947 Active Namespaces 00:21:09.947 ================= 00:21:09.947 get_feature(0x05) failed 00:21:09.947 Namespace ID:1 00:21:09.947 Command Set Identifier: NVM (00h) 00:21:09.947 Deallocate: Supported 00:21:09.947 Deallocated/Unwritten Error: Not Supported 00:21:09.947 Deallocated Read Value: Unknown 00:21:09.947 Deallocate in Write Zeroes: Not Supported 00:21:09.947 Deallocated Guard Field: 0xFFFF 00:21:09.947 Flush: Supported 00:21:09.947 Reservation: Not Supported 00:21:09.947 Namespace Sharing Capabilities: Multiple Controllers 00:21:09.947 Size (in LBAs): 1310720 (5GiB) 00:21:09.947 Capacity (in LBAs): 1310720 (5GiB) 00:21:09.947 Utilization (in LBAs): 1310720 (5GiB) 00:21:09.947 UUID: 6cfd7174-3e47-44ca-8c19-046cbabc1fa4 00:21:09.947 Thin Provisioning: Not Supported 00:21:09.947 Per-NS Atomic Units: Yes 00:21:09.947 Atomic Boundary Size (Normal): 0 00:21:09.947 Atomic Boundary Size (PFail): 0 00:21:09.947 Atomic Boundary Offset: 0 00:21:09.947 NGUID/EUI64 Never Reused: No 00:21:09.947 ANA group ID: 1 00:21:09.947 Namespace Write Protected: No 00:21:09.947 Number of LBA Formats: 1 00:21:09.947 Current LBA Format: LBA Format #00 00:21:09.947 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:09.947 00:21:09.947 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:09.947 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:09.947 10:34:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.947 rmmod nvme_tcp 00:21:09.947 rmmod nvme_fabrics 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:09.947 10:34:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:09.947 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:10.249 10:34:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:11.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.187 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:11.187 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:11.187 ************************************ 00:21:11.187 END TEST nvmf_identify_kernel_target 00:21:11.187 ************************************ 00:21:11.187 00:21:11.187 real 0m3.224s 00:21:11.187 user 0m1.165s 00:21:11.187 sys 0m1.415s 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.187 ************************************ 00:21:11.187 START TEST nvmf_auth_host 00:21:11.187 ************************************ 00:21:11.187 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:11.447 * Looking for test storage... 
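The clean_kernel_target steps traced above follow the standard configfs sequence for tearing down a kernel NVMe-oF target. A minimal sketch, assuming the default configfs mount and that the traced "echo 0" disables the namespace enable attribute (the redirect target is not visible in the xtrace):

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of the traced 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
    rmdir "$cfg/subsystems/$nqn/namespaces/1"             # remove namespace, then port, then subsystem
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules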
00:21:11.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.447 --rc genhtml_branch_coverage=1 00:21:11.447 --rc genhtml_function_coverage=1 00:21:11.447 --rc genhtml_legend=1 00:21:11.447 --rc geninfo_all_blocks=1 00:21:11.447 --rc geninfo_unexecuted_blocks=1 00:21:11.447 00:21:11.447 ' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.447 --rc genhtml_branch_coverage=1 00:21:11.447 --rc genhtml_function_coverage=1 00:21:11.447 --rc genhtml_legend=1 00:21:11.447 --rc geninfo_all_blocks=1 00:21:11.447 --rc geninfo_unexecuted_blocks=1 00:21:11.447 00:21:11.447 ' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.447 --rc genhtml_branch_coverage=1 00:21:11.447 --rc genhtml_function_coverage=1 00:21:11.447 --rc genhtml_legend=1 00:21:11.447 --rc geninfo_all_blocks=1 00:21:11.447 --rc geninfo_unexecuted_blocks=1 00:21:11.447 00:21:11.447 ' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.447 --rc genhtml_branch_coverage=1 00:21:11.447 --rc genhtml_function_coverage=1 00:21:11.447 --rc genhtml_legend=1 00:21:11.447 --rc geninfo_all_blocks=1 00:21:11.447 --rc geninfo_unexecuted_blocks=1 00:21:11.447 00:21:11.447 ' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.447 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:11.448 Cannot find device "nvmf_init_br" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:11.448 Cannot find device "nvmf_init_br2" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:11.448 Cannot find device "nvmf_tgt_br" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:11.448 Cannot find device "nvmf_tgt_br2" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:11.448 Cannot find device "nvmf_init_br" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:11.448 Cannot find device "nvmf_init_br2" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:11.448 Cannot find device "nvmf_tgt_br" 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:11.448 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:11.708 Cannot find device "nvmf_tgt_br2" 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:11.708 Cannot find device "nvmf_br" 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:11.708 Cannot find device "nvmf_init_if" 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:11.708 Cannot find device "nvmf_init_if2" 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.708 10:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
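nvmf_veth_init above builds a small bridged test topology: veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, addresses 10.0.0.1/2 on the initiator side and 10.0.0.3/4 inside the namespace, and the bridge-facing ends enslaved to nvmf_br. A condensed sketch for one initiator/target pair, using only commands that appear in the trace (the second pair is configured the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                         # bridge-facing ends join nvmf_br
    ip link set nvmf_tgt_br master nvmf_br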
00:21:11.708 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:11.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:11.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:21:11.967 00:21:11.967 --- 10.0.0.3 ping statistics --- 00:21:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.967 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:11.967 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:11.967 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:21:11.967 00:21:11.967 --- 10.0.0.4 ping statistics --- 00:21:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.967 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:11.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:11.967 00:21:11.967 --- 10.0.0.1 ping statistics --- 00:21:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.967 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:11.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:21:11.967 00:21:11.967 --- 10.0.0.2 ping statistics --- 00:21:11.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.967 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.967 10:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=93591 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 93591 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 93591 ']' 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
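nvmfappstart then runs the SPDK target inside the test namespace with auth logging enabled, and waitforlisten blocks until the RPC socket is available. A rough equivalent of those two steps; the polling loop is an illustrative stand-in for waitforlisten, not its actual body:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # proceed once the RPC socket exists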
00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.967 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=1063f89a0eb1934610cf5bb03f7beb4b 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.fp0 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 1063f89a0eb1934610cf5bb03f7beb4b 0 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 1063f89a0eb1934610cf5bb03f7beb4b 0 00:21:12.226 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.227 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.227 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=1063f89a0eb1934610cf5bb03f7beb4b 00:21:12.227 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:21:12.227 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.227 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.fp0 00:21:12.227 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.fp0 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fp0 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.486 10:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=11447a82926af1394f3a8391f6fdd997bc9a47c257d267e93c7983043a970957 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.xtP 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 11447a82926af1394f3a8391f6fdd997bc9a47c257d267e93c7983043a970957 3 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 11447a82926af1394f3a8391f6fdd997bc9a47c257d267e93c7983043a970957 3 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=11447a82926af1394f3a8391f6fdd997bc9a47c257d267e93c7983043a970957 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.xtP 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.xtP 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xtP 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=280d6a7d7e9b2c52088a4be9df75dbd1079022983fe290a8 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.pZD 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 280d6a7d7e9b2c52088a4be9df75dbd1079022983fe290a8 0 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 280d6a7d7e9b2c52088a4be9df75dbd1079022983fe290a8 0 
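Each gen_dhchap_key <digest> <len> call traced here follows the same pattern: draw len/2 random bytes as hex, wrap them into a DHHC-1 secret tagged with the digest id (null=0, sha256=1, sha384=2, sha512=3, per the digests array in the trace), and store the result in a 0600 temp file whose path is recorded in keys[]/ckeys[]. A sketch of one iteration; the redirection into the file and the exact DHHC-1 encoding happen in the inline python step and are assumptions here:

    key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex chars (len=48 case)
    file=$(mktemp -t spdk.key-null.XXX)         # e.g. /tmp/spdk.key-null.pZD in this run
    format_dhchap_key "$key" 0 > "$file"        # assumed redirection; emits the DHHC-1:<digest id>:... form
    chmod 0600 "$file"                          # DH-HMAC-CHAP keys are secrets
    echo "$file"                                # caller records the path, here as keys[1]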
00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=280d6a7d7e9b2c52088a4be9df75dbd1079022983fe290a8 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:21:12.486 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.pZD 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.pZD 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pZD 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7b831dd458ad1a3c8d38d0639ae8c05cb872d1f20ec77635 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.WJ9 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7b831dd458ad1a3c8d38d0639ae8c05cb872d1f20ec77635 2 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7b831dd458ad1a3c8d38d0639ae8c05cb872d1f20ec77635 2 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7b831dd458ad1a3c8d38d0639ae8c05cb872d1f20ec77635 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.WJ9 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.WJ9 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WJ9 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.487 10:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=abd3dd628bafdde955479736e6fe935d 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.8s4 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key abd3dd628bafdde955479736e6fe935d 1 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 abd3dd628bafdde955479736e6fe935d 1 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=abd3dd628bafdde955479736e6fe935d 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:21:12.487 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.8s4 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.8s4 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8s4 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:21:12.746 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=042fdc67b5a4943ec48fcb1dfae3e1d8 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.HCf 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 042fdc67b5a4943ec48fcb1dfae3e1d8 1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 042fdc67b5a4943ec48fcb1dfae3e1d8 1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=042fdc67b5a4943ec48fcb1dfae3e1d8 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.HCf 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.HCf 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HCf 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b7196ebf0b5438cd76973a8caf2cdaea858692d56972b077 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.CmZ 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b7196ebf0b5438cd76973a8caf2cdaea858692d56972b077 2 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b7196ebf0b5438cd76973a8caf2cdaea858692d56972b077 2 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b7196ebf0b5438cd76973a8caf2cdaea858692d56972b077 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.CmZ 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.CmZ 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CmZ 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:12.747 10:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=73a8c26547cbf535492e59955bdd8ad2 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.F6u 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 73a8c26547cbf535492e59955bdd8ad2 0 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 73a8c26547cbf535492e59955bdd8ad2 0 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=73a8c26547cbf535492e59955bdd8ad2 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.F6u 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.F6u 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.F6u 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8ee0644d96aec3eb845163fa42513057d5bc18e8c7326268da3ea448d5505e72 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.nGj 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8ee0644d96aec3eb845163fa42513057d5bc18e8c7326268da3ea448d5505e72 3 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8ee0644d96aec3eb845163fa42513057d5bc18e8c7326268da3ea448d5505e72 3 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8ee0644d96aec3eb845163fa42513057d5bc18e8c7326268da3ea448d5505e72 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:21:12.747 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.nGj 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.nGj 00:21:13.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nGj 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93591 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 93591 ']' 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.006 10:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fp0 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xtP ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xtP 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pZD 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WJ9 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.WJ9 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8s4 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HCf ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HCf 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CmZ 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.F6u ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.F6u 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nGj 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:13.266 10:34:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:13.266 10:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:13.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.784 Waiting for block devices as requested 00:21:13.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.784 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:14.351 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:14.611 No valid GPT data, bailing 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:14.611 No valid GPT data, bailing 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:14.611 No valid GPT data, bailing 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:14.611 No valid GPT data, bailing 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:21:14.611 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -a 10.0.0.1 -t tcp -s 4420 00:21:14.871 00:21:14.871 Discovery Log Number of Records 2, Generation counter 2 00:21:14.871 =====Discovery Log Entry 0====== 00:21:14.871 trtype: tcp 00:21:14.871 adrfam: ipv4 00:21:14.871 subtype: current discovery subsystem 00:21:14.871 treq: not specified, sq flow control disable supported 00:21:14.871 portid: 1 00:21:14.871 trsvcid: 4420 00:21:14.871 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:14.871 traddr: 10.0.0.1 00:21:14.871 eflags: none 00:21:14.871 sectype: none 00:21:14.871 =====Discovery Log Entry 1====== 00:21:14.871 trtype: tcp 00:21:14.871 adrfam: ipv4 00:21:14.871 subtype: nvme subsystem 00:21:14.871 treq: not specified, sq flow control disable supported 00:21:14.871 portid: 1 00:21:14.871 trsvcid: 4420 00:21:14.871 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:14.871 traddr: 10.0.0.1 00:21:14.871 eflags: none 00:21:14.871 sectype: none 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.871 10:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.871 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 nvme0n1 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 nvme0n1 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.391 
10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.391 10:34:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 nvme0n1 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.391 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:15.392 10:34:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.392 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.651 nvme0n1 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.651 10:34:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.651 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.652 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:15.652 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.652 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 nvme0n1 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.911 
10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 10:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
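Note on the key material generated earlier in this run: each gen_dhchap_key <digest> <len> call traced above pulls len/2 random bytes with xxd from /dev/urandom, so the hex string itself is the secret, and format_dhchap_key wraps it with a digest index taken from the digests map (null=0, sha256=1, sha384=2, sha512=3). The python body is hidden by xtrace; judging from the DHHC-1:<digest>:<base64>: strings the key files end up holding, it base64-encodes the ASCII hex key with a 4-byte CRC32 appended, which is the usual DHHC-1 secret representation. A minimal stand-alone sketch under that assumption (the little-endian CRC byte order is also an assumption), mirroring the "sha384 48" case:

key=$(xxd -p -c0 -l 24 /dev/urandom)           # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-sha384.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the hex string is used as the ASCII secret
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed byte order
print("DHHC-1:02:" + base64.b64encode(key + crc).decode() + ":")   # 2 = sha384
PY
chmod 0600 "$file"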
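The configure_kernel_target trace earlier in this run assembles the kernel NVMe-oF target through configfs: a subsystem nqn.2024-02.io.spdk:cnode0 with one namespace backed by /dev/nvme1n1, and a TCP port on 10.0.0.1:4420 into which the subsystem is linked. xtrace does not show where the bare echo commands are redirected; the attribute names in the sketch below are the standard nvmet configfs files such a sequence would normally write, so treat them as assumptions rather than something read from this log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target of the first echo
echo 1 > "$subsys/attr_allow_any_host"                        # assumed; auth.sh later echoes 0 before linking the allowed host
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover call that follows in the trace then lists both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420, confirming the port is reachable before any authenticated connect is attempted.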
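Each connect_authenticate round in this test follows one pattern: nvmet_auth_set_key installs the host and controller DHHC-1 secrets for nqn.2024-02.io.spdk:host0 on the kernel target (again through configfs attributes that xtrace leaves implicit; dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are the usual names and are assumed here), then the SPDK host limits its negotiable digests and DH groups, attaches using the keyring entries registered with keyring_file_add_key earlier, checks that a controller named nvme0 exists, and detaches. Condensed from the keyid=1 round above, with rpc_cmd being the test helper that forwards to SPDK's scripts/rpc.py:

hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$hostdir/dhchap_hash"                # assumed attribute name
echo ffdhe2048 > "$hostdir/dhchap_dhgroup"                  # assumed attribute name
cat /tmp/spdk.key-null.pZD > "$hostdir/dhchap_key"          # keys[1], host secret
cat /tmp/spdk.key-sha384.WJ9 > "$hostdir/dhchap_ctrl_key"   # ckeys[1], controller secret

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The later rounds in this log cycle the same sequence through every keyid and through ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192, which is why the bdev_nvme_set_options and bdev_nvme_attach_controller lines recur with only the digest, DH group and key index changing.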
00:21:15.911 nvme0n1 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.911 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.912 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:15.912 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.912 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:15.912 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:15.912 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.912 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:16.480 10:34:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.480 nvme0n1 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.480 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.481 10:34:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.481 10:34:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.481 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.740 nvme0n1 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:16.740 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.741 nvme0n1 00:21:16.741 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.000 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.000 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.000 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.000 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.000 10:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.000 nvme0n1 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.000 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.001 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 nvme0n1 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:17.260 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:17.261 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:17.261 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:17.261 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.261 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.827 10:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:17.827 10:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.827 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:17.827 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:17.827 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:17.827 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.827 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.827 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.085 nvme0n1 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.085 10:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.085 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 nvme0n1 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.344 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.603 nvme0n1 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.603 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.604 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.863 nvme0n1 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.863 10:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.863 10:34:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.863 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 nvme0n1 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.122 10:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.034 10:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.034 nvme0n1 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.034 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.035 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.294 nvme0n1 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.294 10:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.294 10:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.294 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.862 nvme0n1 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:21.862 10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.862 
10:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.121 nvme0n1 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:22.121 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.122 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.380 nvme0n1 00:21:22.381 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.381 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.381 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.381 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.381 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.381 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.640 10:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.640 10:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.208 nvme0n1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.208 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.777 nvme0n1 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.777 
10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.777 10:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.346 nvme0n1 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.346 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 nvme0n1 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 10:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.914 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:24.915 10:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.915 10:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.483 nvme0n1 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.483 nvme0n1 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.483 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.743 nvme0n1 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:25.743 
10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:25.743 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.744 10:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.003 nvme0n1 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.003 
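The trace above repeats one fixed sequence per DH group and key index. A condensed sketch of that sequence follows; rpc_cmd is assumed here to be a thin wrapper around SPDK's scripts/rpc.py (the harness's real wrapper lives in the common autotest scripts), and key0/ckey0 are assumed to be keyring entries loaded earlier in host/auth.sh.

  # Sketch only: rpc_cmd stubbed as a direct call to scripts/rpc.py ($rootdir assumed).
  rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }

  digest=sha384 dhgroup=ffdhe2048 keyid=0
  # Restrict the host to one digest/DH-group pair, then connect with the DH-HMAC-CHAP key pair.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Authentication succeeded if the controller shows up; detach before the next key.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0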
10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.003 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.004 nvme0n1 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.004 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.263 nvme0n1 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.263 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.264 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.524 nvme0n1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.524 
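Before each attach the harness resolves the initiator address with get_main_ns_ip. A minimal sketch of the selection logic visible in the nvmf/common.sh lines above; the candidate map and the 10.0.0.1 result for tcp come from the trace, while the TEST_TRANSPORT variable name and the surrounding plumbing are assumptions.

  get_main_ns_ip() {    # sketch of the helper traced at nvmf/common.sh@765-779
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Pick the env-var name for the transport in use, then dereference it.
      [[ -z $TEST_TRANSPORT ]] && return 1          # transport variable name assumed
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z $ip || -z ${!ip} ]] && return 1
      echo "${!ip}"                                 # resolves to 10.0.0.1 for tcp in this run
  }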
10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.524 10:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.524 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.783 nvme0n1 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:26.783 10:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.783 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.784 10:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.784 nvme0n1 00:21:26.784 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.784 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.784 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.784 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.784 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.043 10:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.043 nvme0n1 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.043 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:27.303 
10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
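Taken together, the span is two nested loops: re-key the target, then authenticate from the host with every key index. A sketch of that outer structure, assuming the keys array and the nvmet_auth_set_key / connect_authenticate helpers defined earlier in host/auth.sh; only sha384 and the three ffdhe groups seen so far in this run are shown.

  # Sketch of the loop driving this trace; keys[], nvmet_auth_set_key and
  # connect_authenticate come from host/auth.sh (not shown in this span).
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # Program the kernel nvmet target with this key, then connect from the host.
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done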
00:21:27.303 nvme0n1 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:27.303 10:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.303 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:27.304 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:27.304 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:27.304 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.304 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.304 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.563 nvme0n1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.563 10:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.563 10:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.563 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.823 nvme0n1 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.823 10:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.823 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.082 nvme0n1 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.082 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.083 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.342 nvme0n1 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:28.342 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.343 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.602 nvme0n1 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:28.602 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.603 10:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.603 10:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.861 nvme0n1 00:21:28.861 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.120 10:35:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.120 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.379 nvme0n1 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:29.379 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.380 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 nvme0n1 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.947 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.948 10:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.206 nvme0n1 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.206 10:35:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.206 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.207 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.465 nvme0n1 00:21:30.465 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.465 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.465 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.465 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.465 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.724 10:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.320 nvme0n1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.320 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.888 nvme0n1 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.888 10:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:31.888 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.889 10:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.889 10:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.889 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.456 nvme0n1 00:21:32.456 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.456 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.456 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.456 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.456 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:32.457 10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.457 
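The get_main_ns_ip block repeated before every attach (nvmf/common.sh@765-779) only decides which address to dial: an associative array maps each transport to the name of an environment variable, the entry for the active transport is selected, and that variable is expanded indirectly, which is why every lookup in this run ends in echo 10.0.0.1. Condensed into a standalone snippet (the TEST_TRANSPORT variable name and the NVMF_INITIATOR_IP value are assumptions taken from what this run prints):

  # condensed reading of get_main_ns_ip (sketch)
  NVMF_INITIATOR_IP=10.0.0.1                                     # value observed in this run
  TEST_TRANSPORT=tcp                                             # transport used by this job
  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  varname=${ip_candidates[$TEST_TRANSPORT]}                      # -> NVMF_INITIATOR_IP
  echo "${!varname}"                                             # indirect expansion -> 10.0.0.1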
10:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.025 nvme0n1 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:33.025 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.026 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 nvme0n1 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:33.594 10:35:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:33.594 10:35:08 
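One detail visible in the keyid=4 passes above: ckey= is assigned the empty string and the [[ -z '' ]] branch is taken, so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 produces nothing and bdev_nvme_attach_controller is issued with --dhchap-key key4 only, i.e. one-way authentication; for keyids 0-3 the same expansion adds --dhchap-ctrlr-key ckeyN so the controller is expected to authenticate back as well (bidirectional DH-HMAC-CHAP). The idiom in isolation (values are placeholders, not the real keys):

  # optional controller-key argument, as built at host/auth.sh@58 (sketch)
  declare -a ckeys=([2]="some-nonempty-key" [4]="")
  keyid=2
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # -> (--dhchap-ctrlr-key ckey2)
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # -> () : no controller key, one-way auth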
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.594 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.854 nvme0n1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:33.854 10:35:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.854 10:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.854 nvme0n1 00:21:33.854 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.114 nvme0n1 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.114 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.374 nvme0n1 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.374 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.634 nvme0n1 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.634 nvme0n1 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.634 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.894 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.895 10:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.895 nvme0n1 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:34.895 
10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.895 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.154 nvme0n1 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.154 
10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:35.154 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.155 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:35.155 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:35.155 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:35.155 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:35.155 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.155 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.414 nvme0n1 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.414 nvme0n1 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.414 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:35.673 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.674 nvme0n1 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.674 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 
10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:35.933 10:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 10:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 nvme0n1 00:21:35.933 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.933 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.933 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:36.193 10:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.193 nvme0n1 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.193 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.452 10:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.452 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.453 nvme0n1 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.453 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:36.712 
10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:36.712 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:36.713 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:36.713 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:36.713 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.713 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
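The trace above repeats the same connect/verify/detach cycle for every digest, DH group, and key index. As a reading aid, here is a condensed sketch of that cycle, reconstructed only from the commands visible in the trace; it is not the verbatim host/auth.sh source, so helper details (argument handling, error checks, how get_main_ns_ip resolves the address) may differ.

    # Hedged reconstruction of connect_authenticate as exercised by the trace above.
    # The RPC names and flags are taken from the traced commands; the surrounding
    # structure is a sketch, not the actual script.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # Pass a controller key argument only when a ckey exists for this key index.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect to the target (10.0.0.1:4420 in this run) with the matching DH-CHAP key.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded if the controller shows up; detach before the next case.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

In the log this sketch corresponds to host/auth.sh@55-@65, invoked from the loops over "${dhgroups[@]}" and "${!keys[@]}" at host/auth.sh@101-@104.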
00:21:36.713 nvme0n1 00:21:36.713 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:36.972 10:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:36.972 10:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.972 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.231 nvme0n1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.231 10:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.231 10:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.231 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.490 nvme0n1 00:21:37.490 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.490 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.490 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.490 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.490 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.750 10:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.011 nvme0n1 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.011 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.270 nvme0n1 00:21:38.270 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.271 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.271 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.271 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.271 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.271 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.530 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.789 nvme0n1 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA2M2Y4OWEwZWIxOTM0NjEwY2Y1YmIwM2Y3YmViNGIk9mg4: 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE0NDdhODI5MjZhZjEzOTRmM2E4MzkxZjZmZGQ5OTdiYzlhNDdjMjU3ZDI2N2U5M2M3OTgzMDQzYTk3MDk1NwkeGko=: 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.789 10:35:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.789 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.790 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.790 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.790 10:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 nvme0n1 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.358 10:35:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.358 10:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.927 nvme0n1 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.927 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.495 nvme0n1 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjcxOTZlYmYwYjU0MzhjZDc2OTczYThjYWYyY2RhZWE4NTg2OTJkNTY5NzJiMDc38CkYTg==: 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNhOGMyNjU0N2NiZjUzNTQ5MmU1OTk1NWJkZDhhZDJwPwFq: 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.495 10:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.064 nvme0n1 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlMDY0NGQ5NmFlYzNlYjg0NTE2M2ZhNDI1MTMwNTdkNWJjMThlOGM3MzI2MjY4ZGEzZWE0NDhkNTUwNWU3MtoQzHU=: 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:41.064 10:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.064 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.633 nvme0n1 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.633 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.893 request: 00:21:41.893 { 00:21:41.893 "name": "nvme0", 00:21:41.893 "trtype": "tcp", 00:21:41.893 "traddr": "10.0.0.1", 00:21:41.893 "adrfam": "ipv4", 00:21:41.893 "trsvcid": "4420", 00:21:41.893 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:41.893 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:41.893 "prchk_reftag": false, 00:21:41.893 "prchk_guard": false, 00:21:41.893 "hdgst": false, 00:21:41.893 "ddgst": false, 00:21:41.893 "allow_unrecognized_csi": false, 00:21:41.893 "method": "bdev_nvme_attach_controller", 00:21:41.893 "req_id": 1 00:21:41.893 } 00:21:41.893 Got JSON-RPC error response 00:21:41.893 response: 00:21:41.893 { 00:21:41.893 "code": -5, 00:21:41.893 "message": "Input/output error" 00:21:41.893 } 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:41.893 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.894 request: 00:21:41.894 { 00:21:41.894 "name": "nvme0", 00:21:41.894 "trtype": "tcp", 00:21:41.894 "traddr": "10.0.0.1", 00:21:41.894 "adrfam": "ipv4", 00:21:41.894 "trsvcid": "4420", 00:21:41.894 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:41.894 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:41.894 "prchk_reftag": false, 00:21:41.894 "prchk_guard": false, 00:21:41.894 "hdgst": false, 00:21:41.894 "ddgst": false, 00:21:41.894 "dhchap_key": "key2", 00:21:41.894 "allow_unrecognized_csi": false, 00:21:41.894 "method": "bdev_nvme_attach_controller", 00:21:41.894 "req_id": 1 00:21:41.894 } 00:21:41.894 Got JSON-RPC error response 00:21:41.894 response: 00:21:41.894 { 00:21:41.894 "code": -5, 00:21:41.894 "message": "Input/output error" 00:21:41.894 } 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.894 10:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.894 10:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.894 request: 00:21:41.894 { 00:21:41.894 "name": "nvme0", 00:21:41.894 "trtype": "tcp", 00:21:41.894 "traddr": "10.0.0.1", 00:21:41.894 "adrfam": "ipv4", 00:21:41.894 "trsvcid": "4420", 
00:21:41.894 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:41.894 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:41.894 "prchk_reftag": false, 00:21:41.894 "prchk_guard": false, 00:21:41.894 "hdgst": false, 00:21:41.894 "ddgst": false, 00:21:41.894 "dhchap_key": "key1", 00:21:41.894 "dhchap_ctrlr_key": "ckey2", 00:21:41.894 "allow_unrecognized_csi": false, 00:21:41.894 "method": "bdev_nvme_attach_controller", 00:21:41.894 "req_id": 1 00:21:41.894 } 00:21:41.894 Got JSON-RPC error response 00:21:41.894 response: 00:21:41.894 { 00:21:41.894 "code": -5, 00:21:41.894 "message": "Input/output error" 00:21:41.894 } 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.894 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.153 nvme0n1 00:21:42.153 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.153 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:42.153 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.154 request: 00:21:42.154 { 00:21:42.154 "name": "nvme0", 00:21:42.154 "dhchap_key": "key1", 00:21:42.154 "dhchap_ctrlr_key": "ckey2", 00:21:42.154 "method": "bdev_nvme_set_keys", 00:21:42.154 "req_id": 1 00:21:42.154 } 00:21:42.154 Got JSON-RPC error response 00:21:42.154 response: 00:21:42.154 
{ 00:21:42.154 "code": -13, 00:21:42.154 "message": "Permission denied" 00:21:42.154 } 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:42.154 10:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwZDZhN2Q3ZTliMmM1MjA4OGE0YmU5ZGY3NWRiZDEwNzkwMjI5ODNmZTI5MGE4KvdMiQ==: 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: ]] 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I4MzFkZDQ1OGFkMWEzYzhkMzhkMDYzOWFlOGMwNWNiODcyZDFmMjBlYzc3NjM183x1bQ==: 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.532 nvme0n1 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:43.532 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWJkM2RkNjI4YmFmZGRlOTU1NDc5NzM2ZTZmZTkzNWTk2Wi7: 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: ]] 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQyZmRjNjdiNWE0OTQzZWM0OGZjYjFkZmFlM2UxZDjc2isO: 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.533 request: 00:21:43.533 { 00:21:43.533 "name": "nvme0", 00:21:43.533 "dhchap_key": "key2", 00:21:43.533 "dhchap_ctrlr_key": "ckey1", 00:21:43.533 "method": "bdev_nvme_set_keys", 00:21:43.533 "req_id": 1 00:21:43.533 } 00:21:43.533 Got JSON-RPC error response 00:21:43.533 response: 00:21:43.533 { 00:21:43.533 "code": -13, 00:21:43.533 "message": "Permission denied" 00:21:43.533 } 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:43.533 10:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.470 rmmod nvme_tcp 00:21:44.470 rmmod nvme_fabrics 00:21:44.470 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 93591 ']' 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 93591 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 93591 ']' 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 93591 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93591 00:21:44.730 killing process with pid 93591 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93591' 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 93591 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 93591 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:44.730 10:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:44.730 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:44.989 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:44.989 10:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:44.989 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:45.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:45.817 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
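
Editor's note: the auth-test teardown traced above walks the kernel nvmet configfs tree in reverse order of creation before unloading the modules. A minimal sketch of that sequence, using the NQNs from this run; the redirect target of the "echo 0" step is not visible in the xtrace output, so the enable attribute shown here is an assumption:

  # detach the allowed host, then remove the host definition
  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # disable the namespace (assumed target of the "echo 0" above), unlink the port, remove the empty nodes
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # drop the kernel modules once nothing references them any more
  modprobe -r nvmet_tcp nvmet
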
00:21:45.817 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:45.817 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fp0 /tmp/spdk.key-null.pZD /tmp/spdk.key-sha256.8s4 /tmp/spdk.key-sha384.CmZ /tmp/spdk.key-sha512.nGj /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:45.817 10:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:46.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.386 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:46.386 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:46.386 ************************************ 00:21:46.386 END TEST nvmf_auth_host 00:21:46.386 ************************************ 00:21:46.386 00:21:46.386 real 0m35.062s 00:21:46.386 user 0m32.623s 00:21:46.386 sys 0m3.860s 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.386 ************************************ 00:21:46.386 START TEST nvmf_digest 00:21:46.386 ************************************ 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:46.386 * Looking for test storage... 
00:21:46.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:46.386 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:46.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.646 --rc genhtml_branch_coverage=1 00:21:46.646 --rc genhtml_function_coverage=1 00:21:46.646 --rc genhtml_legend=1 00:21:46.646 --rc geninfo_all_blocks=1 00:21:46.646 --rc geninfo_unexecuted_blocks=1 00:21:46.646 00:21:46.646 ' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:46.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.646 --rc genhtml_branch_coverage=1 00:21:46.646 --rc genhtml_function_coverage=1 00:21:46.646 --rc genhtml_legend=1 00:21:46.646 --rc geninfo_all_blocks=1 00:21:46.646 --rc geninfo_unexecuted_blocks=1 00:21:46.646 00:21:46.646 ' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:46.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.646 --rc genhtml_branch_coverage=1 00:21:46.646 --rc genhtml_function_coverage=1 00:21:46.646 --rc genhtml_legend=1 00:21:46.646 --rc geninfo_all_blocks=1 00:21:46.646 --rc geninfo_unexecuted_blocks=1 00:21:46.646 00:21:46.646 ' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:46.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.646 --rc genhtml_branch_coverage=1 00:21:46.646 --rc genhtml_function_coverage=1 00:21:46.646 --rc genhtml_legend=1 00:21:46.646 --rc geninfo_all_blocks=1 00:21:46.646 --rc geninfo_unexecuted_blocks=1 00:21:46.646 00:21:46.646 ' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.646 10:35:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.646 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.647 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:46.647 Cannot find device "nvmf_init_br" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:46.647 Cannot find device "nvmf_init_br2" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:46.647 Cannot find device "nvmf_tgt_br" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:46.647 Cannot find device "nvmf_tgt_br2" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:46.647 Cannot find device "nvmf_init_br" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:46.647 Cannot find device "nvmf_init_br2" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:46.647 Cannot find device "nvmf_tgt_br" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:46.647 Cannot find device "nvmf_tgt_br2" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:46.647 Cannot find device "nvmf_br" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:46.647 Cannot find device "nvmf_init_if" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:46.647 Cannot find device "nvmf_init_if2" 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.647 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.907 10:35:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.907 10:35:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:46.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:46.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:21:46.907 00:21:46.907 --- 10.0.0.3 ping statistics --- 00:21:46.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.907 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:46.907 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:46.907 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:46.907 00:21:46.907 --- 10.0.0.4 ping statistics --- 00:21:46.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.907 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:46.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:46.907 00:21:46.907 --- 10.0.0.1 ping statistics --- 00:21:46.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.907 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:46.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:21:46.907 00:21:46.907 --- 10.0.0.2 ping statistics --- 00:21:46.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.907 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:46.907 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:47.166 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:47.167 ************************************ 00:21:47.167 START TEST nvmf_digest_clean 00:21:47.167 ************************************ 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
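
Editor's note: the four ping checks a few lines above close out nvmf_veth_init: the harness builds a private test network out of veth pairs, moves the target-side interfaces into the nvmf_tgt_ns_spdk namespace, bridges everything over nvmf_br, opens TCP port 4420 in iptables, and pings each address once to prove connectivity. A condensed sketch of that setup for one initiator/target pair, using only names, addresses, and commands that appear in this log (the second pair, nvmf_init_if2/nvmf_tgt_if2, follows the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # initiator-side view of the target address inside the namespace
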
00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=95230 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 95230 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95230 ']' 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.167 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:47.167 [2024-12-10 10:35:22.216492] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:47.167 [2024-12-10 10:35:22.216585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.167 [2024-12-10 10:35:22.357480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.426 [2024-12-10 10:35:22.399861] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.426 [2024-12-10 10:35:22.399925] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.426 [2024-12-10 10:35:22.399940] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.427 [2024-12-10 10:35:22.399950] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.427 [2024-12-10 10:35:22.399959] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
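
Editor's note: nvmfappstart then launches the target inside that namespace with --wait-for-rpc, so the app pauses before subsystem initialization until the framework is started over its RPC socket, and waitforlisten blocks until /var/tmp/spdk.sock answers. A sketch of the same startup; the polling loop is a simplified stand-in for the real waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # simplified stand-in for waitforlisten: poll until the RPC socket responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done
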
00:21:47.427 [2024-12-10 10:35:22.399999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:47.427 [2024-12-10 10:35:22.558954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.427 null0 00:21:47.427 [2024-12-10 10:35:22.594265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.427 [2024-12-10 10:35:22.618301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95259 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95259 /var/tmp/bperf.sock 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95259 ']' 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.427 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:47.686 [2024-12-10 10:35:22.687081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:47.686 [2024-12-10 10:35:22.687563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95259 ] 00:21:47.686 [2024-12-10 10:35:22.827001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.686 [2024-12-10 10:35:22.868976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.945 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.946 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:47.946 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:47.946 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:47.946 10:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:48.205 [2024-12-10 10:35:23.252019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:48.205 10:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.205 10:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.464 nvme0n1 00:21:48.464 10:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:48.464 10:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.723 Running I/O for 2 seconds... 
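
Editor's note: each run_bperf case repeats the pattern traced above: start the bdevperf example app with the workload parameters and --wait-for-rpc, start its framework, attach an NVMe-oF TCP controller with --ddgst so a data digest is carried on every request, then drive the run through bdevperf.py. A sketch of this first randread case (4096-byte I/O, queue depth 128), assembled only from commands that appear in this log:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (waiting for /var/tmp/bperf.sock to come up is omitted here for brevity)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
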
00:21:50.595 17780.00 IOPS, 69.45 MiB/s [2024-12-10T10:35:25.822Z] 17780.00 IOPS, 69.45 MiB/s 00:21:50.595 Latency(us) 00:21:50.595 [2024-12-10T10:35:25.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.595 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:50.595 nvme0n1 : 2.01 17774.39 69.43 0.00 0.00 7197.17 6613.18 17396.83 00:21:50.595 [2024-12-10T10:35:25.822Z] =================================================================================================================== 00:21:50.595 [2024-12-10T10:35:25.822Z] Total : 17774.39 69.43 0.00 0.00 7197.17 6613.18 17396.83 00:21:50.595 { 00:21:50.595 "results": [ 00:21:50.595 { 00:21:50.595 "job": "nvme0n1", 00:21:50.595 "core_mask": "0x2", 00:21:50.595 "workload": "randread", 00:21:50.595 "status": "finished", 00:21:50.595 "queue_depth": 128, 00:21:50.595 "io_size": 4096, 00:21:50.595 "runtime": 2.007833, 00:21:50.595 "iops": 17774.386614823045, 00:21:50.595 "mibps": 69.43119771415252, 00:21:50.595 "io_failed": 0, 00:21:50.595 "io_timeout": 0, 00:21:50.595 "avg_latency_us": 7197.1705716207125, 00:21:50.595 "min_latency_us": 6613.178181818182, 00:21:50.595 "max_latency_us": 17396.82909090909 00:21:50.595 } 00:21:50.595 ], 00:21:50.595 "core_count": 1 00:21:50.595 } 00:21:50.595 10:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:50.595 10:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:50.595 10:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:50.595 10:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:50.595 10:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:50.595 | select(.opcode=="crc32c") 00:21:50.595 | "\(.module_name) \(.executed)"' 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95259 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95259 ']' 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95259 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.855 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95259 00:21:51.114 killing process with pid 95259 00:21:51.114 Received shutdown signal, test time was about 2.000000 seconds 00:21:51.114 00:21:51.114 Latency(us) 00:21:51.114 [2024-12-10T10:35:26.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:51.114 [2024-12-10T10:35:26.341Z] =================================================================================================================== 00:21:51.114 [2024-12-10T10:35:26.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95259' 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95259 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95259 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95303 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95303 /var/tmp/bperf.sock 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95303 ']' 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:51.114 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.115 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:51.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:51.115 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.115 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:51.115 [2024-12-10 10:35:26.286143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:51.115 [2024-12-10 10:35:26.286454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95303 ] 00:21:51.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:51.115 Zero copy mechanism will not be used. 00:21:51.374 [2024-12-10 10:35:26.423128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.374 [2024-12-10 10:35:26.457626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.374 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.374 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:51.374 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:51.374 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:51.374 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:51.633 [2024-12-10 10:35:26.740300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.633 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:51.633 10:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:51.892 nvme0n1 00:21:51.892 10:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:51.892 10:35:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:52.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:52.151 Zero copy mechanism will not be used. 00:21:52.151 Running I/O for 2 seconds... 
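
Editor's note: alongside the human-readable table, each perform_tests run above also emits a JSON document with a results array (job, iops, mibps, avg_latency_us, and so on). A hedged one-liner for pulling the headline numbers out of it with jq, assuming the JSON has been captured to a file named results.json (the file name is illustrative, not from this log):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' results.json
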
00:21:54.050 8624.00 IOPS, 1078.00 MiB/s [2024-12-10T10:35:29.277Z] 8680.00 IOPS, 1085.00 MiB/s 00:21:54.050 Latency(us) 00:21:54.050 [2024-12-10T10:35:29.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.050 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:54.050 nvme0n1 : 2.00 8676.11 1084.51 0.00 0.00 1841.30 1645.85 3589.59 00:21:54.050 [2024-12-10T10:35:29.277Z] =================================================================================================================== 00:21:54.050 [2024-12-10T10:35:29.277Z] Total : 8676.11 1084.51 0.00 0.00 1841.30 1645.85 3589.59 00:21:54.050 { 00:21:54.050 "results": [ 00:21:54.050 { 00:21:54.050 "job": "nvme0n1", 00:21:54.050 "core_mask": "0x2", 00:21:54.050 "workload": "randread", 00:21:54.050 "status": "finished", 00:21:54.050 "queue_depth": 16, 00:21:54.050 "io_size": 131072, 00:21:54.050 "runtime": 2.002741, 00:21:54.050 "iops": 8676.109392078157, 00:21:54.050 "mibps": 1084.5136740097696, 00:21:54.050 "io_failed": 0, 00:21:54.050 "io_timeout": 0, 00:21:54.050 "avg_latency_us": 1841.3028227021598, 00:21:54.050 "min_latency_us": 1645.8472727272726, 00:21:54.050 "max_latency_us": 3589.5854545454545 00:21:54.051 } 00:21:54.051 ], 00:21:54.051 "core_count": 1 00:21:54.051 } 00:21:54.051 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:54.051 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:54.051 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:54.051 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:54.051 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:54.051 | select(.opcode=="crc32c") 00:21:54.051 | "\(.module_name) \(.executed)"' 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95303 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95303 ']' 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95303 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95303 00:21:54.310 killing process with pid 95303 00:21:54.310 Received shutdown signal, test time was about 2.000000 seconds 00:21:54.310 00:21:54.310 Latency(us) 00:21:54.310 [2024-12-10T10:35:29.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:54.310 [2024-12-10T10:35:29.537Z] =================================================================================================================== 00:21:54.310 [2024-12-10T10:35:29.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95303' 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95303 00:21:54.310 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95303 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95355 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95355 /var/tmp/bperf.sock 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95355 ']' 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:54.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.569 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:54.569 [2024-12-10 10:35:29.689809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:54.569 [2024-12-10 10:35:29.690078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95355 ] 00:21:54.829 [2024-12-10 10:35:29.822019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.829 [2024-12-10 10:35:29.856366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.829 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.829 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:54.829 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:54.829 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:54.829 10:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:55.089 [2024-12-10 10:35:30.222369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:55.089 10:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:55.089 10:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:55.398 nvme0n1 00:21:55.399 10:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:55.399 10:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:55.657 Running I/O for 2 seconds... 
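Note: the randwrite 4096/128 run above is driven entirely over the bperf UNIX socket; bdevperf is launched with --wait-for-rpc, the framework is started, a controller with data digest enabled (--ddgst) is attached to the listener at 10.0.0.3:4420, and the workload is kicked off through bdevperf.py. A minimal hand-driven sketch of that same sequence, using only the commands visible in this trace (repo-relative paths shortened here; assumes bdevperf is already running with -r /var/tmp/bperf.sock --wait-for-rpc as shown above):

    # sketch only - mirrors the RPC sequence traced above, not the canonical test flow
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests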
00:21:57.531 19178.00 IOPS, 74.91 MiB/s [2024-12-10T10:35:32.758Z] 21678.00 IOPS, 84.68 MiB/s 00:21:57.531 Latency(us) 00:21:57.531 [2024-12-10T10:35:32.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:57.531 nvme0n1 : 2.01 21680.72 84.69 0.00 0.00 5893.35 3738.53 14179.61 00:21:57.531 [2024-12-10T10:35:32.758Z] =================================================================================================================== 00:21:57.531 [2024-12-10T10:35:32.758Z] Total : 21680.72 84.69 0.00 0.00 5893.35 3738.53 14179.61 00:21:57.531 { 00:21:57.531 "results": [ 00:21:57.531 { 00:21:57.531 "job": "nvme0n1", 00:21:57.531 "core_mask": "0x2", 00:21:57.531 "workload": "randwrite", 00:21:57.531 "status": "finished", 00:21:57.531 "queue_depth": 128, 00:21:57.531 "io_size": 4096, 00:21:57.531 "runtime": 2.005284, 00:21:57.531 "iops": 21680.71953897802, 00:21:57.531 "mibps": 84.6903106991329, 00:21:57.531 "io_failed": 0, 00:21:57.531 "io_timeout": 0, 00:21:57.531 "avg_latency_us": 5893.351207019129, 00:21:57.531 "min_latency_us": 3738.530909090909, 00:21:57.531 "max_latency_us": 14179.607272727273 00:21:57.531 } 00:21:57.531 ], 00:21:57.531 "core_count": 1 00:21:57.531 } 00:21:57.531 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:57.531 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:57.531 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:57.531 | select(.opcode=="crc32c") 00:21:57.531 | "\(.module_name) \(.executed)"' 00:21:57.531 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:57.531 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95355 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95355 ']' 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95355 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.790 10:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95355 00:21:58.049 killing process with pid 95355 00:21:58.049 Received shutdown signal, test time was about 2.000000 seconds 00:21:58.049 00:21:58.049 Latency(us) 00:21:58.049 [2024-12-10T10:35:33.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:58.049 [2024-12-10T10:35:33.276Z] =================================================================================================================== 00:21:58.049 [2024-12-10T10:35:33.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95355' 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95355 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95355 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95403 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95403 /var/tmp/bperf.sock 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95403 ']' 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:58.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.049 10:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:58.049 [2024-12-10 10:35:33.216471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:58.049 [2024-12-10 10:35:33.216772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95403 ] 00:21:58.049 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:58.049 Zero copy mechanism will not be used. 00:21:58.309 [2024-12-10 10:35:33.347838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.309 [2024-12-10 10:35:33.380929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:59.246 [2024-12-10 10:35:34.423564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:59.246 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:59.814 nvme0n1 00:21:59.814 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:59.814 10:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:59.814 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:59.814 Zero copy mechanism will not be used. 00:21:59.814 Running I/O for 2 seconds... 
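Note: after each timed run the script reads the accel framework statistics back over the same socket and verifies that the crc32c digest work was actually executed, and by the expected module (software here, since scan_dsa=false). An illustrative sketch of that check, built only from the accel_get_stats call and jq filter that appear in this trace (variable names mirror host/digest.sh but are shown for clarity, not as the canonical script):

    # illustrative sketch of the digest accounting check seen at host/digest.sh@93-96
    read -r acc_module acc_executed < <(scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ "$acc_module" == "software" ]] && echo "crc32c executed $acc_executed times in $acc_module"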
00:22:01.686 7357.00 IOPS, 919.62 MiB/s [2024-12-10T10:35:36.913Z] 7368.00 IOPS, 921.00 MiB/s 00:22:01.686 Latency(us) 00:22:01.686 [2024-12-10T10:35:36.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.686 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:01.686 nvme0n1 : 2.00 7363.67 920.46 0.00 0.00 2167.88 1921.40 11498.59 00:22:01.686 [2024-12-10T10:35:36.913Z] =================================================================================================================== 00:22:01.686 [2024-12-10T10:35:36.913Z] Total : 7363.67 920.46 0.00 0.00 2167.88 1921.40 11498.59 00:22:01.686 { 00:22:01.686 "results": [ 00:22:01.686 { 00:22:01.686 "job": "nvme0n1", 00:22:01.686 "core_mask": "0x2", 00:22:01.686 "workload": "randwrite", 00:22:01.686 "status": "finished", 00:22:01.686 "queue_depth": 16, 00:22:01.686 "io_size": 131072, 00:22:01.686 "runtime": 2.00335, 00:22:01.686 "iops": 7363.6658596850275, 00:22:01.686 "mibps": 920.4582324606284, 00:22:01.686 "io_failed": 0, 00:22:01.686 "io_timeout": 0, 00:22:01.686 "avg_latency_us": 2167.8843305068035, 00:22:01.686 "min_latency_us": 1921.3963636363637, 00:22:01.686 "max_latency_us": 11498.58909090909 00:22:01.686 } 00:22:01.686 ], 00:22:01.686 "core_count": 1 00:22:01.686 } 00:22:01.686 10:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:01.686 10:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:01.686 10:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:01.686 10:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:01.686 10:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:01.686 | select(.opcode=="crc32c") 00:22:01.686 | "\(.module_name) \(.executed)"' 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95403 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95403 ']' 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95403 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.945 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95403 00:22:02.204 killing process with pid 95403 00:22:02.204 Received shutdown signal, test time was about 2.000000 seconds 00:22:02.204 00:22:02.204 Latency(us) 00:22:02.204 [2024-12-10T10:35:37.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:02.204 [2024-12-10T10:35:37.431Z] =================================================================================================================== 00:22:02.204 [2024-12-10T10:35:37.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95403' 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95403 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95403 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95230 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95230 ']' 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95230 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95230 00:22:02.204 killing process with pid 95230 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95230' 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95230 00:22:02.204 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95230 00:22:02.464 ************************************ 00:22:02.464 END TEST nvmf_digest_clean 00:22:02.464 ************************************ 00:22:02.464 00:22:02.464 real 0m15.337s 00:22:02.464 user 0m29.891s 00:22:02.464 sys 0m4.453s 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:02.464 ************************************ 00:22:02.464 START TEST nvmf_digest_error 00:22:02.464 ************************************ 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:22:02.464 10:35:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=95487 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 95487 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95487 ']' 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.464 10:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:02.464 [2024-12-10 10:35:37.594179] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:02.464 [2024-12-10 10:35:37.594447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.723 [2024-12-10 10:35:37.728230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.723 [2024-12-10 10:35:37.759007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.723 [2024-12-10 10:35:37.759055] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.723 [2024-12-10 10:35:37.759080] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.723 [2024-12-10 10:35:37.759087] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.723 [2024-12-10 10:35:37.759093] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
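Note: the nvmf_digest_error phase beginning here starts the target with --wait-for-rpc so that crc32c operations can be reassigned to the error accel module before initialization completes; during the host runs that follow, digest corruption is injected on demand and each read is completed with a transient transport error on the initiator. A hedged sketch of that wiring, using the RPC names and arguments that appear verbatim further down in this trace (rpc_cmd is assumed to resolve to scripts/rpc.py against the target's default socket):

    # sketch of the crc32c error-injection setup used by this test phase
    scripts/rpc.py accel_assign_opc -o crc32c -m error                    # before framework_start_init (digest.sh@104)
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # leave digests intact while attaching (digest.sh@63)
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt crc32c results (digest.sh@67)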
00:22:02.723 [2024-12-10 10:35:37.759127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.660 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.660 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:03.660 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:03.660 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.660 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:03.661 [2024-12-10 10:35:38.591552] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:03.661 [2024-12-10 10:35:38.625076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:03.661 null0 00:22:03.661 [2024-12-10 10:35:38.655326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.661 [2024-12-10 10:35:38.679447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95519 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95519 /var/tmp/bperf.sock 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:03.661 10:35:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95519 ']' 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:03.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.661 10:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:03.661 [2024-12-10 10:35:38.776280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:03.661 [2024-12-10 10:35:38.776686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95519 ] 00:22:03.920 [2024-12-10 10:35:38.931256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.921 [2024-12-10 10:35:38.972841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.921 [2024-12-10 10:35:39.005824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:03.921 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.921 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:03.921 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:03.921 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:04.180 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:04.180 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.180 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.180 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.180 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:04.180 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:04.439 nvme0n1 00:22:04.439 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:04.439 10:35:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.439 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.439 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.439 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:04.439 10:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.698 Running I/O for 2 seconds... 00:22:04.698 [2024-12-10 10:35:39.687750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.698 [2024-12-10 10:35:39.687814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.698 [2024-12-10 10:35:39.687829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.698 [2024-12-10 10:35:39.701871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.698 [2024-12-10 10:35:39.701905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.698 [2024-12-10 10:35:39.701933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.698 [2024-12-10 10:35:39.716121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.698 [2024-12-10 10:35:39.716156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.698 [2024-12-10 10:35:39.716183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.698 [2024-12-10 10:35:39.730452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.698 [2024-12-10 10:35:39.730667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.698 [2024-12-10 10:35:39.730685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.698 [2024-12-10 10:35:39.746696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.698 [2024-12-10 10:35:39.746731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.698 [2024-12-10 10:35:39.746744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.698 [2024-12-10 10:35:39.763555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.698 [2024-12-10 10:35:39.763592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21299 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.763660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.778897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.779103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.779119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.793696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.793899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.793917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.808160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.808347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.808379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.822449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.822483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.822511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.836693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.836730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.836742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.850775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.850827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.850854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.864908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.864940] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.864968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.879564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.879813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.879831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.894123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.894159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.894187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.908411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.908470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.908499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.699 [2024-12-10 10:35:39.923053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.699 [2024-12-10 10:35:39.923086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.699 [2024-12-10 10:35:39.923113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.958 [2024-12-10 10:35:39.938049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.958 [2024-12-10 10:35:39.938084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.958 [2024-12-10 10:35:39.938112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.958 [2024-12-10 10:35:39.952358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.958 [2024-12-10 10:35:39.952573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.958 [2024-12-10 10:35:39.952606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.958 [2024-12-10 10:35:39.966909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.958 [2024-12-10 10:35:39.966944] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.958 [2024-12-10 10:35:39.966972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.958 [2024-12-10 10:35:39.981026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.958 [2024-12-10 10:35:39.981059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.958 [2024-12-10 10:35:39.981086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.958 [2024-12-10 10:35:39.995121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.958 [2024-12-10 10:35:39.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.958 [2024-12-10 10:35:39.995182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.010794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.010830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.010858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.027668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.027846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.027866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.042745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.042927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.042964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.057654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.057851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.057883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.072187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.072381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.072399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.086587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.086619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.086647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.100591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.100624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.100652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.114602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.114634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.114661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.128765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.128797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.128824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.142966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.142999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.143026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.157312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.157346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.157373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.959 [2024-12-10 10:35:40.171401] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:04.959 [2024-12-10 10:35:40.171460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.959 [2024-12-10 10:35:40.171489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.186315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.186568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.186601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.201124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.201158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.201186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.215307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.215340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.215367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.229428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.229460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.229487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.243403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.243434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.243461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.257691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.257726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.257753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:05.218 [2024-12-10 10:35:40.271732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.218 [2024-12-10 10:35:40.271767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.218 [2024-12-10 10:35:40.271796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.218 [2024-12-10 10:35:40.285811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.285843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.285870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.299830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.299864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.299892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.313882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.313914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.313941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.328319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.328352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.328379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.342747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.342913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.342946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.357162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.357198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.357225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.371309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.371342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.371386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.385818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.385850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.385877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.399902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.399936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.399964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.413983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.414016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.414043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.428468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.428500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.428527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.219 [2024-12-10 10:35:40.443043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.219 [2024-12-10 10:35:40.443076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.219 [2024-12-10 10:35:40.443103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.457767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.457799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.457826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.471912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.472143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.472174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.486173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.486390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.486596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.500974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.501191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.501320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.515535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.515761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.515915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.530329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.530560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.530693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.544989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.545202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.545320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.559375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.559628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:05.479 [2024-12-10 10:35:40.559771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.574101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.574317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.574531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.588881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.589093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.589210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.609295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.609519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.609536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.623385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.623442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.623469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.637458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.637490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.637517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.652851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.652886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.652913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 17332.00 IOPS, 67.70 MiB/s [2024-12-10T10:35:40.706Z] [2024-12-10 10:35:40.669714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.669781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.669794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.479 [2024-12-10 10:35:40.684948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.479 [2024-12-10 10:35:40.685135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.479 [2024-12-10 10:35:40.685168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.700300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.700522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.700540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.716316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.716549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.716674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.732103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.732304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.732515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.747781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.748034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.748154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.764305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.764529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.764655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.781893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.782096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.782268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.798682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.798886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.799033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.814029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.814247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.814370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.829539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.829756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.829881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.845075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.845293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.845485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.860348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.860612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.860737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.875679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.875865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.875913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.889969] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.890004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.890031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.904461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.904495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.904522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.918519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.918550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.918577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.932622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.932653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.932680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.946614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.946647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.946674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.739 [2024-12-10 10:35:40.961410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:05.739 [2024-12-10 10:35:40.961486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.739 [2024-12-10 10:35:40.961499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:40.976524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:40.976556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:40.976583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:06.005 [2024-12-10 10:35:40.990594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:40.990626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:40.990653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.004603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.004636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.004663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.018424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.018455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.018482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.032383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.032443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.032472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.046268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.046301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.046328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.060265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.060297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.060324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.074205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.074238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.074265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.088238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.088271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.088297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.102225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.102451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.102469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.116498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.116531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.116558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.130335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.130368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.130395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.144605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.144637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.144664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.158443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.158475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.158501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.172357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.172389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.172445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.186308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.186341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.186368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.200397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.200473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.200486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.214419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.214450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.214478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.005 [2024-12-10 10:35:41.229173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.005 [2024-12-10 10:35:41.229208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.005 [2024-12-10 10:35:41.229236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.244065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.244097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.244124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.258119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.258152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.258179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.272152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.272184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.266 [2024-12-10 10:35:41.272211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.286097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.286129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.286156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.300169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.300202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.300229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.314171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.314204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.314232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.328202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.328234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.328261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.342443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.342475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.342503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.356308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.266 [2024-12-10 10:35:41.356340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.266 [2024-12-10 10:35:41.356367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.266 [2024-12-10 10:35:41.370243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.370276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16008 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.370303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.384293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.384325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.384352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.398472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.398505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.398532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.412683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.412716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.412744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.426657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.426689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.426716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.440649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.440680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.440707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.454506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.454537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.454564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.468515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.468546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.468574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.267 [2024-12-10 10:35:41.482523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.267 [2024-12-10 10:35:41.482554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.267 [2024-12-10 10:35:41.482581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.497881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.497915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.497942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.512454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.512531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.512544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.526493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.526691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.526707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.546727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.546760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.546788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.560698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.560730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.560758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.574607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 
[2024-12-10 10:35:41.574639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.574667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.588500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.588558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.602349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.602563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.602595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.616693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.616728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.616755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.630800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.630832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.630860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.644807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.644840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.644868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 [2024-12-10 10:35:41.658736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.658769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.658797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 17458.00 IOPS, 68.20 MiB/s [2024-12-10T10:35:41.753Z] [2024-12-10 10:35:41.674030] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f905a0) 00:22:06.526 [2024-12-10 10:35:41.674063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-12-10 10:35:41.674091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.526 00:22:06.526 Latency(us) 00:22:06.526 [2024-12-10T10:35:41.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.527 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:06.527 nvme0n1 : 2.01 17441.99 68.13 0.00 0.00 7333.35 6702.55 26929.34 00:22:06.527 [2024-12-10T10:35:41.754Z] =================================================================================================================== 00:22:06.527 [2024-12-10T10:35:41.754Z] Total : 17441.99 68.13 0.00 0.00 7333.35 6702.55 26929.34 00:22:06.527 { 00:22:06.527 "results": [ 00:22:06.527 { 00:22:06.527 "job": "nvme0n1", 00:22:06.527 "core_mask": "0x2", 00:22:06.527 "workload": "randread", 00:22:06.527 "status": "finished", 00:22:06.527 "queue_depth": 128, 00:22:06.527 "io_size": 4096, 00:22:06.527 "runtime": 2.009174, 00:22:06.527 "iops": 17441.99357546932, 00:22:06.527 "mibps": 68.13278740417704, 00:22:06.527 "io_failed": 0, 00:22:06.527 "io_timeout": 0, 00:22:06.527 "avg_latency_us": 7333.348835645578, 00:22:06.527 "min_latency_us": 6702.545454545455, 00:22:06.527 "max_latency_us": 26929.33818181818 00:22:06.527 } 00:22:06.527 ], 00:22:06.527 "core_count": 1 00:22:06.527 } 00:22:06.527 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:06.527 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:06.527 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:06.527 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:06.527 | .driver_specific 00:22:06.527 | .nvme_error 00:22:06.527 | .status_code 00:22:06.527 | .command_transient_transport_error' 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95519 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95519 ']' 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95519 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95519 00:22:06.786 killing process with pid 95519 00:22:06.786 Received shutdown signal, test time was about 2.000000 seconds 00:22:06.786 00:22:06.786 Latency(us) 00:22:06.786 [2024-12-10T10:35:42.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.786 
[2024-12-10T10:35:42.013Z] =================================================================================================================== 00:22:06.786 [2024-12-10T10:35:42.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95519' 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95519 00:22:06.786 10:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95519 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95572 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95572 /var/tmp/bperf.sock 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95572 ']' 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.046 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:07.046 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:07.046 Zero copy mechanism will not be used. 00:22:07.046 [2024-12-10 10:35:42.168136] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:07.046 [2024-12-10 10:35:42.168224] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95572 ] 00:22:07.305 [2024-12-10 10:35:42.297764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.305 [2024-12-10 10:35:42.330863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.305 [2024-12-10 10:35:42.358750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:07.305 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.305 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:07.305 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:07.305 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:07.564 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:07.564 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.564 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:07.564 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.564 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:07.564 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:07.822 nvme0n1 00:22:07.822 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:07.822 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.822 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:07.822 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.822 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:07.822 10:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.082 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:08.083 Zero copy mechanism will not be used. 00:22:08.083 Running I/O for 2 seconds... 
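The trace above is the error-injection path of host/digest.sh for this pass: bdevperf attaches the TCP controller with data digests enabled (--ddgst), the accel error-injection RPC is told to corrupt crc32c results (-t corrupt -i 32), and the workload is then run so that every mismatched data digest completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the script later reads back with bdev_get_iostat. Below is a condensed sketch of that sequence, reusing only the RPC calls visible in this log; the socket path, target address, and final threshold check are taken from this run for illustration and assume a bdevperf instance already listening on /var/tmp/bperf.sock against an NVMe-oF TCP target at 10.0.0.3:4420 (not a verbatim copy of the script).

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe status codes on the initiator and retry indefinitely instead of failing the I/O.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest enabled, then have the accel layer corrupt crc32c results
    # (-t corrupt -i 32, as in the trace above) so received data digests stop matching.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive I/O, then read back how many completions were flagged 00/22
    # (COMMAND TRANSIENT TRANSPORT ERROR) for the nvme0n1 bdev.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    errcount=$($RPC bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"

The closing (( errcount > 0 )) check mirrors the (( 137 > 0 )) comparison recorded earlier in this log for the previous bperf run.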
00:22:08.083 [2024-12-10 10:35:43.067153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.067214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.067228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.071087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.071123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.071151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.075054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.075260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.075277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.079300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.079336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.079365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.083204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.083240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.083268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.087369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.087452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.087466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.091584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.091663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.091677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.095746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.095783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.095811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.100248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.100286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.100315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.104599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.104635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.104664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.108870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.108906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.108934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.113315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.113351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.113379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.117665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.117702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.117731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.121779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.121814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.121842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.125863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.125898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.125925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.129903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.129939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.129967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.133907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.133942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.133970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.138090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.138127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.138155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.142207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.142242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.142271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.146175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.146210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.146238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.150134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.150170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:08.083 [2024-12-10 10:35:43.150198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.154305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.154342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.154385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.158266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.158302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.158330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.162430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.162465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.162492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.166364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.166443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.166457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.170428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.083 [2024-12-10 10:35:43.170462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.083 [2024-12-10 10:35:43.170489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.083 [2024-12-10 10:35:43.174598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.174633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.174645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.178693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.178729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.178757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.182682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.182717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.182744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.186616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.186651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.186678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.190566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.190601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.190628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.194712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.194747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.194775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.198764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.198799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.198827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.202779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.202815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.202842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.206783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.206819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.206847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.210820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.210873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.210901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.214978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.215013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.215042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.219007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.219043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.219071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.223142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.223180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.223208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.227287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.227323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.227351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.231530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.231565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.231592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.235540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:08.084 [2024-12-10 10:35:43.235574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.235626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.239553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.239587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.239655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.243524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.243558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.243586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.247490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.247524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.247552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.251696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.251733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.251746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.255656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.255693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.255706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.259545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.259578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.259628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.263470] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.263503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.263531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.267440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.267473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.267500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.271710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.271749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.271778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.275683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.275719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.275748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.279749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.279785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.279814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.084 [2024-12-10 10:35:43.283698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.084 [2024-12-10 10:35:43.283736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.084 [2024-12-10 10:35:43.283765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.085 [2024-12-10 10:35:43.287704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.085 [2024-12-10 10:35:43.287743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.085 [2024-12-10 10:35:43.287757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:22:08.085 [2024-12-10 10:35:43.291607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.085 [2024-12-10 10:35:43.291674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.085 [2024-12-10 10:35:43.291686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.085 [2024-12-10 10:35:43.295494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.085 [2024-12-10 10:35:43.295528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.085 [2024-12-10 10:35:43.295555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.085 [2024-12-10 10:35:43.299643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.085 [2024-12-10 10:35:43.299684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.085 [2024-12-10 10:35:43.299697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.085 [2024-12-10 10:35:43.304116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.085 [2024-12-10 10:35:43.304149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.085 [2024-12-10 10:35:43.304177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.308661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.308696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.308723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.312919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.312970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.313013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.317205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.317239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.317266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.321142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.321176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.321203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.325136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.325170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.325197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.329221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.329255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.329283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.333196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.333230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.333258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.337161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.337211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.337238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.341286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.341320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.341348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.345293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.345327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.345354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.349311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.349345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.349373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.353249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.353282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.353310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.357250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.357311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.361143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.361177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.361204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.365161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.365195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.365223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.369207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.369241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.369269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.373151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.373185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:08.346 [2024-12-10 10:35:43.373213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.377170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.377203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.377231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.381267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.381302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.381329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.385270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.385304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.385331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.389297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.389331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.389358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.393241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.393276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.393304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.397246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.346 [2024-12-10 10:35:43.397280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.346 [2024-12-10 10:35:43.397307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.346 [2024-12-10 10:35:43.401204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.401238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.401266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.405168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.405202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.405229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.409134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.409168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.409195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.413168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.413202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.413230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.417109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.417144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.417171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.421126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.421160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.421187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.425070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.425104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.425131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.429111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.429145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.429172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.433134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.433168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.433195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.437125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.437158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.437186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.441066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.441100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.441128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.445041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.445075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.445103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.449027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.449062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.449088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.452972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.453006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.453033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.456862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:08.347 [2024-12-10 10:35:43.456896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.456924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.460800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.460834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.460861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.464690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.464723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.464751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.468575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.468608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.468635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.472415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.472649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.472668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.476648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.476694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.476707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.480639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.480672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.480699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.484453] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.484682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.484699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.488586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.488619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.488646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.492402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.492630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.492647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.496604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.496637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.496665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.500518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.500553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.500580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.504319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.504542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.347 [2024-12-10 10:35:43.504561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.347 [2024-12-10 10:35:43.508313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.347 [2024-12-10 10:35:43.508519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.508536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.512554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.512587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.512614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.516378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.516604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.516622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.520519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.520552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.520579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.524305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.524508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.524525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.528449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.528493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.528521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.532382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.532591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.532624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.536532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.536565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.536593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.540411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.540640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.540672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.544582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.544616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.544644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.548472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.548515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.548543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.552312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.552556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.556375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.556564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.556596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.560522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.560556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.560583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.564384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.564592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.564608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.348 [2024-12-10 10:35:43.569082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.348 [2024-12-10 10:35:43.569116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.348 [2024-12-10 10:35:43.569144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.573453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.573487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.573514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.577715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.577766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.577794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.581861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.581895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.581922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.585874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.585934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.589783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.589832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.589859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.593720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.593755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:08.609 [2024-12-10 10:35:43.593782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.597718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.597752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.597780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.601638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.601673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.601700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.605493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.605527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.605555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.609282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.609316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.609343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.613319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.613354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.613382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.617202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.617236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.617263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.621135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.621170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.621197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.625114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.625147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.625175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.629069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.609 [2024-12-10 10:35:43.629120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.609 [2024-12-10 10:35:43.629148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.609 [2024-12-10 10:35:43.633082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.633117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.633144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.637049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.637083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.637110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.641013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.641047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.641074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.644924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.644959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.644987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.648838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.648871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.648899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.652785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.652818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.652845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.656709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.656743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.656770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.660503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.660535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.660562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.664384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.664614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.664631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.668565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.668599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.668627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.672359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.672564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.672581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.676450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:08.610 [2024-12-10 10:35:43.676662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.676695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.680568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.680602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.680629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.684396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.684637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.684655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.688525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.688559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.688586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.692336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.692548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.692581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.696432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.696620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.696653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.700613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.700648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.700661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.704493] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.704525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.704552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.708366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.708571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.708588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.712554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.712588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.712615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.716368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.716592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.716608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.720543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.720576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.720603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.724364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.724552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.724584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.728447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.728685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.728702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.732891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.732926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.732953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.736813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.736846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.736874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.740829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.610 [2024-12-10 10:35:43.740862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.610 [2024-12-10 10:35:43.740889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.610 [2024-12-10 10:35:43.744681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.744714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.744741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.748547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.748580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.748607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.752527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.752560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.752587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.756551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.756585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.756613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.760542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.760576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.760604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.764590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.764623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.764649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.768476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.768509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.768537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.772396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.772640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.776470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.776502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.776530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.780388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.780613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.784596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.784629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.784657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.788451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.788689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.788706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.792606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.792639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.792666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.796556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.796616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.800431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.800662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.800679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.804612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.804645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.804672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.808454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.808656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.808674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.812660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.812695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.812722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.816527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.816560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.816587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.820321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.820545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.820562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.824630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.824667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.824696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.611 [2024-12-10 10:35:43.828967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.611 [2024-12-10 10:35:43.829003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.611 [2024-12-10 10:35:43.829046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.833710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.833763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.833792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.838214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.838251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.838279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.843060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.843112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.843141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.847732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.847772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.847786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.852122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.852156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.852183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.856520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.856555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.856584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.860732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.860783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.860825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.864997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.865030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.865057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.869188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.869222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.873589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.873625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.873653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.877587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.877622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.877633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.881533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.881565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.881592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.885391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.885471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.885500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.889297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.889330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.889358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.893266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.893302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.893329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.897243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.897279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.897307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.901291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.901325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.901352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.905293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.905327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.905354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.909255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.909288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.909316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.913206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.913240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.913267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.917149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.917182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.917211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.921093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.921127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.921154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.925050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.925084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.872 [2024-12-10 10:35:43.925111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.872 [2024-12-10 10:35:43.928950] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.872 [2024-12-10 10:35:43.928984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.929010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.932964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.932998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.933025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.936870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.936904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.936931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.940833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.940868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.940895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.944783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.944833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.944860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.948663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.948697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.948725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.952464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.952507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.952535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.956291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.956325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.956352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.960213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.960247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.960274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.964167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.964200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.964228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.968038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.968071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.968098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.972002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.972036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.972063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.975807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.975843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.975856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.979624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.979672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.979700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.983409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.983439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.983466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.987188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.987386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.987402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.991326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.991533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.991550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.995460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.995493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.995521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:43.999434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:43.999468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:43.999496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.003302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.003529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.003547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.007433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.007467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.007494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.011311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.011514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.011532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.015415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.015448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.015475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.019243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.019466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.019486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.023329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.023519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.023550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.027366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.027573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.027589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.031498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.031532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.873 [2024-12-10 10:35:44.031559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.035316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.873 [2024-12-10 10:35:44.035543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:08.873 [2024-12-10 10:35:44.035560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.873 [2024-12-10 10:35:44.039342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.039549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.039582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.043463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.043497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.043524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.047259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.047466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.047501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.051320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.051544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.055454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.055488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.055516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.059290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.059513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.059530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.874 7657.00 IOPS, 957.12 MiB/s [2024-12-10T10:35:44.101Z] [2024-12-10 10:35:44.065094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.065130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.065158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.068991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.069026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.069054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.072872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.072907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.072934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.076716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.076751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.076779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.080679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.080713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.080741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.084474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.084518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.084546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.088281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:08.874 [2024-12-10 10:35:44.088315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.088342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.874 [2024-12-10 10:35:44.092346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:08.874 [2024-12-10 10:35:44.092442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.874 [2024-12-10 10:35:44.092457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.096762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.096813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.096841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.100854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.100897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.100926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.105100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.105134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.105161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.109016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.109050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.109078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.113068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.113101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.113129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.116995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.117028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.117056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.121011] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.121045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.121072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.124974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.125008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.125035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.128883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.128917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.128944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.132849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.132883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.132910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.136731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.136766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.136793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.140648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.140682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.140710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.144584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.144617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.144645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.148437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.148515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.148529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.152390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.152450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.152479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.156220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.156254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.156281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.160102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.160135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.160162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.164108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.164142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.164169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.168035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.168069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.168097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.172065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.172098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.172126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.176061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.176094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.176122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.180078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.180112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.180139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.184093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.184127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.184154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.188008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.188042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.188070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.191869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.191905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.191917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.195805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.195843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.195871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.135 [2024-12-10 10:35:44.199765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.135 [2024-12-10 10:35:44.199801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.135 [2024-12-10 10:35:44.199813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.203595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.203671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.203684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.207489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.207523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.207552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.211360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.211587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.211628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.215537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.215572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.215622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.219318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.219546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.219563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.223504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.223538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.223566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.227285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.227489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.227505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.231486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.231519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.231546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.235242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.235470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.235488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.239254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.239498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.243473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.243507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.243534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.247343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.247569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.247585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.251582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.251660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.251674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.255722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.255762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.255775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.259787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.259823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.259851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.263746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.263783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.263796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.267532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.267564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.267591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.271385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.271429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.271456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.275228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.275450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.275469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.279381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.279583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.279624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.283528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.283562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.283589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.287510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.287543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.287571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.291385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.291429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.291456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.295225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.295452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.295470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.299281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.299483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.299501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.303678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.303718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.303731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.307827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.136 [2024-12-10 10:35:44.307866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.136 [2024-12-10 10:35:44.307879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.136 [2024-12-10 10:35:44.312110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:09.136 [2024-12-10 10:35:44.312146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.312175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.316237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.316273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.316300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.320903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.320940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.320970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.325364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.325444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.325474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.329804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.329855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.329882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.334038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.334074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.334101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.338316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.338352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.338379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.342616] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.342651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.342678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.346721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.346772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.346799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.350953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.350987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.351015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.137 [2024-12-10 10:35:44.355250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.137 [2024-12-10 10:35:44.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.137 [2024-12-10 10:35:44.355329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.359947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.360013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.360040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.364141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.364175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.364202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.368382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.368443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.368471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.372629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.372663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.372691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.376586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.376620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.376647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.380648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.380683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.380711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.384697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.384731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.384758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.388880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.388916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.388944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.392931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.392967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.392994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.396953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.396988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.397015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.401017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.401052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.401080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.397 [2024-12-10 10:35:44.405033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.397 [2024-12-10 10:35:44.405068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.397 [2024-12-10 10:35:44.405095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.409325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.409361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.409388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.413459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.413495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.413523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.417392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.417452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.417480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.421310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.421346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.421373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.425309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.425345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.425372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.429373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.429432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.429461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.433371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.433448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.433477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.437471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.437533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.441460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.441496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.441524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.445631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.445669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.445682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.449582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.449617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.449645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.453508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.453542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:09.398 [2024-12-10 10:35:44.453569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.457440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.457475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.457503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.461363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.461600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.461618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.465765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.465846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.465874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.469834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.469885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.469912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.473805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.473869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.473897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.477817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.477867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.477894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.481849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.481899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.481926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.485961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.486011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.486039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.489942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.489992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.490019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.494044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.494095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.494123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.498069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.498120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.498147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.502347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.502425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.502440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.506320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.506369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.506397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.510326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.510377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.510405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.514448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.514497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.398 [2024-12-10 10:35:44.514524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.398 [2024-12-10 10:35:44.518768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.398 [2024-12-10 10:35:44.518805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.518833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.522830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.522879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.522905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.526879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.526927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.526954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.530715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.530763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.530791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.534617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.534666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.534693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.538502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:09.399 [2024-12-10 10:35:44.538550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.538577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.542356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.542430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.542443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.546149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.546197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.546223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.550184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.550219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.550247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.554089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.554139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.554165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.558120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.558168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.558196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.562081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.562146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.562172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.566008] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.566057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.566085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.569988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.570037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.570064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.573959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.574008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.574034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.577836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.577884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.577911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.581789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.581852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.581880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.585666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.585716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.585743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.589526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.589574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.589602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.593334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.593384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.593435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.597242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.597291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.597319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.601183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.601232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.601259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.605173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.605222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.605248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.609060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.609109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.609136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.613073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.613122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.613148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.399 [2024-12-10 10:35:44.617122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.399 [2024-12-10 10:35:44.617158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.399 [2024-12-10 10:35:44.617185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.621636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.621687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.621714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.625698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.625747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.625774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.629906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.629955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.629981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.633826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.633874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.633902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.637743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.637791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.637818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.641621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.641672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.641699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.645573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.645620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.645647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.649403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.649463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.649490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.653309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.653358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.653385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.657163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.657211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.657238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.661097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.661146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.661173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.665027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.665075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.665103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.668972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.669021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.660 [2024-12-10 10:35:44.669048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.660 [2024-12-10 10:35:44.672856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.660 [2024-12-10 10:35:44.672904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:09.660 [2024-12-10 10:35:44.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.676760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.676809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.676850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.680581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.680631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.680658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.684427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.684486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.684514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.688280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.688329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.688356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.692176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.692224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.692251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.696092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.696126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.696153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.700145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.700186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.700200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.704076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.704123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.704150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.708033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.708081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.708108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.711856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.711909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.711922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.715701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.715738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.715750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.719528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.719561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.719588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.723487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.723521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.723548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.727561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.727596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.727663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.731348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.731423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.731435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.735166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.735214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.735242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.739164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.739212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.739239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.743067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.743116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.743143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.746926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.746970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.746998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.750752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.750800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.750827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.754865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:09.661 [2024-12-10 10:35:44.754915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.754943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.758872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.758922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.758949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.762987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.763053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.763081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.766918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.766966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.766992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.770788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.770836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.770863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.774637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.774685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.774712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.778498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.778546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.661 [2024-12-10 10:35:44.778573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.661 [2024-12-10 10:35:44.782410] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.661 [2024-12-10 10:35:44.782468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.782495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.786312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.786361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.786388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.790215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.790264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.790291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.794072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.794121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.794147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.797998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.798046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.798073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.801968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.802016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.802044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.805908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.805957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.805984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.809897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.809945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.809973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.813778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.813826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.813853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.817667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.817715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.817743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.821533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.821581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.821609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.825399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.825458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.825486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.829416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.829478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.829506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.833321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.833372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.833414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.837182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.837231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.837259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.841016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.841065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.841092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.844966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.845014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.845041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.849039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.849104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.849131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.853378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.853455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.853484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.857622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.857658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.857687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.862058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.862121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.862149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.866688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.866755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.866783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.871214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.871263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.871290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.875522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.875558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.875586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.662 [2024-12-10 10:35:44.879747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.662 [2024-12-10 10:35:44.879802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-10 10:35:44.879815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.884626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.884678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.884706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.888744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.888793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.888821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.893151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.893202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:09.923 [2024-12-10 10:35:44.893229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.897191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.897243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.897255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.901184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.901234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.901262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.905246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.905297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.905324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.909355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.909428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.909440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.913278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.913327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.913355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.917214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.917263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.917290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.921154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.921203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.921230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.925141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.925189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.925217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.929078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.929127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.929154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.933127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.933176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.933203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.937029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.937077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.923 [2024-12-10 10:35:44.937104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.923 [2024-12-10 10:35:44.940990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.923 [2024-12-10 10:35:44.941038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.941066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.945047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.945095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.945122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.948990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.949039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.949066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.952867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.952915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.952943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.956743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.956791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.956818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.960583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.960632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.960659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.964435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.964493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.964520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.968314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.968363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.968389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.972203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.972251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.972278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.976013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 
00:22:09.924 [2024-12-10 10:35:44.976062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.976089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.979923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.980019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.980046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.983831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.983882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.983910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.987748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.987785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.987813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.991481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.991529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.991556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.995372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.995448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:44.999292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:44.999342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:44.999369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.003187] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.003236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.003263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.007082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.007130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.007158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.010908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.010957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.010983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.014895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.014944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.014971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.018636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.018686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.018713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.022462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.022511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.022538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.026270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.026319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.026346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.030220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.030269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.030296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.034202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.034251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.034278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.038152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.038201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.038228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.042027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.042078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.042106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.045975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.924 [2024-12-10 10:35:45.046024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.924 [2024-12-10 10:35:45.046052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.924 [2024-12-10 10:35:45.049755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.925 [2024-12-10 10:35:45.049803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.925 [2024-12-10 10:35:45.049830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.925 [2024-12-10 10:35:45.053656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.925 [2024-12-10 10:35:45.053705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.925 [2024-12-10 10:35:45.053733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.925 [2024-12-10 10:35:45.057519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.925 [2024-12-10 10:35:45.057567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.925 [2024-12-10 10:35:45.057595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.925 7696.00 IOPS, 962.00 MiB/s [2024-12-10T10:35:45.152Z] [2024-12-10 10:35:45.062697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2429220) 00:22:09.925 [2024-12-10 10:35:45.062746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.925 [2024-12-10 10:35:45.062774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.925 00:22:09.925 Latency(us) 00:22:09.925 [2024-12-10T10:35:45.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.925 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:09.925 nvme0n1 : 2.00 7694.40 961.80 0.00 0.00 2076.11 1653.29 5659.93 00:22:09.925 [2024-12-10T10:35:45.152Z] =================================================================================================================== 00:22:09.925 [2024-12-10T10:35:45.152Z] Total : 7694.40 961.80 0.00 0.00 2076.11 1653.29 5659.93 00:22:09.925 { 00:22:09.925 "results": [ 00:22:09.925 { 00:22:09.925 "job": "nvme0n1", 00:22:09.925 "core_mask": "0x2", 00:22:09.925 "workload": "randread", 00:22:09.925 "status": "finished", 00:22:09.925 "queue_depth": 16, 00:22:09.925 "io_size": 131072, 00:22:09.925 "runtime": 2.002495, 00:22:09.925 "iops": 7694.401234460011, 00:22:09.925 "mibps": 961.8001543075014, 00:22:09.925 "io_failed": 0, 00:22:09.925 "io_timeout": 0, 00:22:09.925 "avg_latency_us": 2076.105382799962, 00:22:09.925 "min_latency_us": 1653.2945454545454, 00:22:09.925 "max_latency_us": 5659.927272727273 00:22:09.925 } 00:22:09.925 ], 00:22:09.925 "core_count": 1 00:22:09.925 } 00:22:09.925 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:09.925 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:09.925 | .driver_specific 00:22:09.925 | .nvme_error 00:22:09.925 | .status_code 00:22:09.925 | .command_transient_transport_error' 00:22:09.925 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:09.925 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 497 > 0 )) 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95572 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95572 ']' 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # 
kill -0 95572 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95572 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:10.184 killing process with pid 95572 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95572' 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95572 00:22:10.184 Received shutdown signal, test time was about 2.000000 seconds 00:22:10.184 00:22:10.184 Latency(us) 00:22:10.184 [2024-12-10T10:35:45.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.184 [2024-12-10T10:35:45.411Z] =================================================================================================================== 00:22:10.184 [2024-12-10T10:35:45.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.184 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95572 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95614 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95614 /var/tmp/bperf.sock 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95614 ']' 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
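Before the bdevperf instance above (pid 95572) is killed, the xtrace lines from the previous leg show how each leg of nvmf_digest_error is judged: the harness reads per-bdev I/O statistics over the bdevperf RPC socket and extracts the number of completions that finished with the TRANSIENT TRANSPORT ERROR status. The sketch below is reconstructed from that trace (helper name, jq path, socket and bdev name are all taken verbatim from the log); the real host/digest.sh may differ in detail.

# Count completions that carried COMMAND TRANSIENT TRANSPORT ERROR for a bdev.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The leg passes only if at least one injected digest error surfaced as a
# transient transport error; this randread leg observed 497 of them.
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))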
00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.442 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:10.442 [2024-12-10 10:35:45.550202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:10.442 [2024-12-10 10:35:45.550318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95614 ] 00:22:10.701 [2024-12-10 10:35:45.691343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.701 [2024-12-10 10:35:45.724827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.701 [2024-12-10 10:35:45.752761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:10.702 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.702 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:10.702 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:10.702 10:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:10.961 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:10.961 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.961 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:10.961 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.961 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:10.961 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:11.220 nvme0n1 00:22:11.220 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:11.220 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.220 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:11.479 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.479 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:11.479 10:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:11.479 Running I/O for 2 seconds... 
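The setup just traced for the randwrite 4096/128 leg can be read as the following condensed replay. Every path, address, NQN and flag is copied verbatim from the trace; the helper names match the trace (bperf_rpc wraps rpc.py against /var/tmp/bperf.sock, rpc_cmd is the suite's other RPC helper), but this is a sketch of the flow, not the literal host/digest.sh source.

# Start bdevperf on its own RPC socket: 4 KiB random writes, queue depth 128, 2 s run.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Retry failed I/O indefinitely and keep per-NVMe error statistics, so injected
# digest errors are counted instead of failing the workload outright.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest (--ddgst) enabled while crc32c error
# injection is disabled, then switch injection to 'corrupt' (arguments copied
# verbatim) so crc32c results are deliberately damaged from here on.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the I/O; the corrupted digests show up below as "Data digest error"
# lines and "COMMAND TRANSIENT TRANSPORT ERROR" completions on the WRITE commands.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests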
00:22:11.479 [2024-12-10 10:35:46.562636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fef90 00:22:11.479 [2024-12-10 10:35:46.565075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.479 [2024-12-10 10:35:46.565129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.479 [2024-12-10 10:35:46.577019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:11.479 [2024-12-10 10:35:46.579351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.479 [2024-12-10 10:35:46.579419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:11.479 [2024-12-10 10:35:46.590688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fe2e8 00:22:11.479 [2024-12-10 10:35:46.592955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.479 [2024-12-10 10:35:46.593003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:11.479 [2024-12-10 10:35:46.604746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fda78 00:22:11.480 [2024-12-10 10:35:46.607079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.607128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:11.480 [2024-12-10 10:35:46.618514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fd208 00:22:11.480 [2024-12-10 10:35:46.620729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.620761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:11.480 [2024-12-10 10:35:46.631752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fc998 00:22:11.480 [2024-12-10 10:35:46.633892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.633923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:11.480 [2024-12-10 10:35:46.645212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fc128 00:22:11.480 [2024-12-10 10:35:46.647402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.647471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:22:11.480 [2024-12-10 10:35:46.658662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fb8b8 00:22:11.480 [2024-12-10 10:35:46.660822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.660853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:11.480 [2024-12-10 10:35:46.672017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fb048 00:22:11.480 [2024-12-10 10:35:46.674223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.674249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:11.480 [2024-12-10 10:35:46.685371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198fa7d8 00:22:11.480 [2024-12-10 10:35:46.687527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.687765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:11.480 [2024-12-10 10:35:46.698989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f9f68 00:22:11.480 [2024-12-10 10:35:46.701597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.480 [2024-12-10 10:35:46.701825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:11.739 [2024-12-10 10:35:46.715664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f96f8 00:22:11.739 [2024-12-10 10:35:46.718179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.739 [2024-12-10 10:35:46.718296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:11.739 [2024-12-10 10:35:46.731294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f8e88 00:22:11.739 [2024-12-10 10:35:46.733910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.739 [2024-12-10 10:35:46.734136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:11.739 [2024-12-10 10:35:46.747130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f8618 00:22:11.739 [2024-12-10 10:35:46.749540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.739 [2024-12-10 10:35:46.749747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:22:11.739 [2024-12-10 10:35:46.762163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f7da8 00:22:11.739 [2024-12-10 10:35:46.764776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.739 [2024-12-10 10:35:46.764980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:11.739 [2024-12-10 10:35:46.777275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f7538 00:22:11.739 [2024-12-10 10:35:46.779649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.739 [2024-12-10 10:35:46.779825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:11.739 [2024-12-10 10:35:46.792045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f6cc8 00:22:11.739 [2024-12-10 10:35:46.794207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.794432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.806701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f6458 00:22:11.740 [2024-12-10 10:35:46.808973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.809175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.821604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f5be8 00:22:11.740 [2024-12-10 10:35:46.823816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.824052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.836422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f5378 00:22:11.740 [2024-12-10 10:35:46.838649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.838808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.851078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f4b08 00:22:11.740 [2024-12-10 10:35:46.853344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.853376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.865679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f4298 00:22:11.740 [2024-12-10 10:35:46.867665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.867845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.880524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f3a28 00:22:11.740 [2024-12-10 10:35:46.882473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.882653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.895727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f31b8 00:22:11.740 [2024-12-10 10:35:46.897837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.897872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.911397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f2948 00:22:11.740 [2024-12-10 10:35:46.913480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.913515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.926223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f20d8 00:22:11.740 [2024-12-10 10:35:46.928337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.928368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.940960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f1868 00:22:11.740 [2024-12-10 10:35:46.942845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.942875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:11.740 [2024-12-10 10:35:46.954700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f0ff8 00:22:11.740 [2024-12-10 10:35:46.956888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.740 [2024-12-10 10:35:46.956918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:11.999 [2024-12-10 10:35:46.969864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198f0788 00:22:11.999 [2024-12-10 10:35:46.971655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.999 [2024-12-10 10:35:46.971690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:11.999 [2024-12-10 10:35:46.983212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198eff18 00:22:12.000 [2024-12-10 10:35:46.985125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:46.985155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:46.996884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ef6a8 00:22:12.000 [2024-12-10 10:35:46.998680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:46.998709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.010321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198eee38 00:22:12.000 [2024-12-10 10:35:47.012193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.012223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.023969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ee5c8 00:22:12.000 [2024-12-10 10:35:47.025743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.025776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.037420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198edd58 00:22:12.000 [2024-12-10 10:35:47.039156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.039187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.050852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ed4e8 00:22:12.000 [2024-12-10 10:35:47.052650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.052680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.064257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ecc78 00:22:12.000 [2024-12-10 10:35:47.066262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.066293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.077986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ec408 00:22:12.000 [2024-12-10 10:35:47.079713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.079890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.091809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ebb98 00:22:12.000 [2024-12-10 10:35:47.093519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.093551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.105465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198eb328 00:22:12.000 [2024-12-10 10:35:47.107337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.107370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.119249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198eaab8 00:22:12.000 [2024-12-10 10:35:47.120996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.121027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.132865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198ea248 00:22:12.000 [2024-12-10 10:35:47.134534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.134565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.146967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e99d8 00:22:12.000 [2024-12-10 10:35:47.148635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.148666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.160504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e9168 00:22:12.000 [2024-12-10 10:35:47.162080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.162111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.174195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e88f8 00:22:12.000 [2024-12-10 10:35:47.175917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.176129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.188113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e8088 00:22:12.000 [2024-12-10 10:35:47.189678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.189711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.201534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e7818 00:22:12.000 [2024-12-10 10:35:47.203041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.203071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:12.000 [2024-12-10 10:35:47.215010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e6fa8 00:22:12.000 [2024-12-10 10:35:47.216584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.000 [2024-12-10 10:35:47.216616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.229872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e6738 00:22:12.260 [2024-12-10 10:35:47.231343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.231374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.243265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e5ec8 00:22:12.260 [2024-12-10 10:35:47.244901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.244948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.256895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e5658 00:22:12.260 [2024-12-10 10:35:47.258357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.258388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.270316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e4de8 00:22:12.260 [2024-12-10 10:35:47.271835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.271868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.283795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e4578 00:22:12.260 [2024-12-10 10:35:47.285562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.285594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.297557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e3d08 00:22:12.260 [2024-12-10 10:35:47.299214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.299260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.311705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e3498 00:22:12.260 [2024-12-10 10:35:47.313373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.313444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.325576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e2c28 00:22:12.260 [2024-12-10 10:35:47.326968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.326999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.339187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e23b8 00:22:12.260 [2024-12-10 10:35:47.340682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.340714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.352716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e1b48 00:22:12.260 [2024-12-10 10:35:47.354069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.354099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.366222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e12d8 00:22:12.260 [2024-12-10 10:35:47.367655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.367689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.379714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e0a68 00:22:12.260 [2024-12-10 10:35:47.381038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.381068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.393161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e01f8 00:22:12.260 [2024-12-10 10:35:47.394528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.394581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.406702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198df988 00:22:12.260 [2024-12-10 10:35:47.408335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.408367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.420565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198df118 00:22:12.260 [2024-12-10 10:35:47.421851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.260 [2024-12-10 10:35:47.421881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:12.260 [2024-12-10 10:35:47.434044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198de8a8 00:22:12.261 [2024-12-10 10:35:47.435262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.261 [2024-12-10 
10:35:47.435293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:12.261 [2024-12-10 10:35:47.447548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198de038 00:22:12.261 [2024-12-10 10:35:47.448840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.261 [2024-12-10 10:35:47.448870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:12.261 [2024-12-10 10:35:47.466570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198de038 00:22:12.261 [2024-12-10 10:35:47.468890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.261 [2024-12-10 10:35:47.468921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.261 [2024-12-10 10:35:47.480062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198de8a8 00:22:12.261 [2024-12-10 10:35:47.482725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.261 [2024-12-10 10:35:47.482755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:12.520 [2024-12-10 10:35:47.494789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198df118 00:22:12.520 [2024-12-10 10:35:47.497020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.520 [2024-12-10 10:35:47.497050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:12.520 [2024-12-10 10:35:47.508322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198df988 00:22:12.520 [2024-12-10 10:35:47.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.520 [2024-12-10 10:35:47.510582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:12.520 [2024-12-10 10:35:47.521872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e01f8 00:22:12.520 [2024-12-10 10:35:47.524101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.520 [2024-12-10 10:35:47.524282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:12.520 [2024-12-10 10:35:47.535670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198e0a68 00:22:12.520 [2024-12-10 10:35:47.538183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:12.520 [2024-12-10 10:35:47.538217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:12.520 17965.00 IOPS, 70.18 MiB/s [2024-12-10T10:35:47.747Z] [2024-12-10 10:35:47.547214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.520 [2024-12-10 10:35:47.547624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.520 [2024-12-10 10:35:47.547799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.520 [2024-12-10 10:35:47.557985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.558346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.558594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.569486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.569912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.570069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.580840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.581200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.581483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.592082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.592460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.592702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.603194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.603581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.603837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.614468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.614842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:7842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.615058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.625510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.625874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.625895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.636421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.636595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.636615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.647081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.647264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.647284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.657776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.657976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.657995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.668653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.668854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.668873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.679275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.679483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.679502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.690056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.690230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.690249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.700849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.701024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.701043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.711482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.711704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.711725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.722242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.722417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.722464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.733035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.733224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.733242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.521 [2024-12-10 10:35:47.744414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.521 [2024-12-10 10:35:47.744655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.521 [2024-12-10 10:35:47.744677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.756057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.756234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.756253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.767214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 
10:35:47.767403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.767423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.778056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.778245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.778264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.788804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.789009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.789028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.799339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.799572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.799592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.810039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.810212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.810231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.820699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.820877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.820896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.831172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.831348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.831367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.841920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with 
pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.842096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.852628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.852803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.852823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.863105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.863282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.863301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.874019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.874192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.874211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.884709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.884885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.884904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.895456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.895716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.895736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.907108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.907288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.907307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.919773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.920024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.920050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.932870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.933059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.781 [2024-12-10 10:35:47.933084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.781 [2024-12-10 10:35:47.945391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.781 [2024-12-10 10:35:47.945635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.782 [2024-12-10 10:35:47.945697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.782 [2024-12-10 10:35:47.957342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.782 [2024-12-10 10:35:47.957602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.782 [2024-12-10 10:35:47.957633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.782 [2024-12-10 10:35:47.969190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.782 [2024-12-10 10:35:47.969376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.782 [2024-12-10 10:35:47.969395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.782 [2024-12-10 10:35:47.980525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.782 [2024-12-10 10:35:47.980749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.782 [2024-12-10 10:35:47.980774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.782 [2024-12-10 10:35:47.991574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.782 [2024-12-10 10:35:47.991781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.782 [2024-12-10 10:35:47.991801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:12.782 [2024-12-10 10:35:48.003175] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:12.782 [2024-12-10 10:35:48.003372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.782 [2024-12-10 10:35:48.003391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.041 [2024-12-10 10:35:48.015193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.041 [2024-12-10 10:35:48.015389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.041 [2024-12-10 10:35:48.015409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.041 [2024-12-10 10:35:48.026681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.041 [2024-12-10 10:35:48.026860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.041 [2024-12-10 10:35:48.026880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.041 [2024-12-10 10:35:48.037968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.041 [2024-12-10 10:35:48.038323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.041 [2024-12-10 10:35:48.038343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.041 [2024-12-10 10:35:48.049638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.041 [2024-12-10 10:35:48.049873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.041 [2024-12-10 10:35:48.049893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.041 [2024-12-10 10:35:48.060901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.041 [2024-12-10 10:35:48.061266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.041 [2024-12-10 10:35:48.061297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.041 [2024-12-10 10:35:48.072454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.041 [2024-12-10 10:35:48.072842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.072863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.084062] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.084413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.084433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.095591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.095818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.095838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.106572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.106940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.106960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.117547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.117920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.117945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.128921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.129104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.129123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.139878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.140289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.140309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.150889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.151233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.151252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 
[2024-12-10 10:35:48.161900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.162244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.162265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.172827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.173163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.173183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.183580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.183987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.184008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.194491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.194682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.194701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.205021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.205197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.205216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.215640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.215827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.215847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.226159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.226560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.226586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.237142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.237530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.237551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.248433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.248837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.248857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.042 [2024-12-10 10:35:48.259189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.042 [2024-12-10 10:35:48.259373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.042 [2024-12-10 10:35:48.259391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.271177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.271381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.271413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.281950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.282126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.282145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.292677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.292854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.292872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.303261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.303466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.303501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.313900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.314073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.314092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.324598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.324777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.324797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.335099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.335272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.335290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.345837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.346014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.346033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.356534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.356715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.356733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.367023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.367376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.367395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.378097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.378504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.378713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.389385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.389763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.302 [2024-12-10 10:35:48.389989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.302 [2024-12-10 10:35:48.400369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.302 [2024-12-10 10:35:48.400767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.400937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.411823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.412225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.412376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.422736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.423122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.423350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.434170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.434554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.434711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.444953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.445310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.445578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.455874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.456240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.456478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.466746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.467088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.467108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.477693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.478034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.478055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.488559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.488900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.488919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.499366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.499551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.499570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.509927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.510110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.510129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.303 [2024-12-10 10:35:48.520593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.303 [2024-12-10 10:35:48.520775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.303 [2024-12-10 10:35:48.520794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.629 [2024-12-10 10:35:48.532569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.630 [2024-12-10 10:35:48.532801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.630 [2024-12-10 
10:35:48.532825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.630 20470.50 IOPS, 79.96 MiB/s [2024-12-10T10:35:48.857Z] [2024-12-10 10:35:48.547160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605430) with pdu=0x2000198feb58 00:22:13.630 [2024-12-10 10:35:48.547559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.630 [2024-12-10 10:35:48.547583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.630 00:22:13.630 Latency(us) 00:22:13.630 [2024-12-10T10:35:48.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.630 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:13.630 nvme0n1 : 2.01 20459.19 79.92 0.00 0.00 6241.82 3991.74 26929.34 00:22:13.630 [2024-12-10T10:35:48.857Z] =================================================================================================================== 00:22:13.630 [2024-12-10T10:35:48.857Z] Total : 20459.19 79.92 0.00 0.00 6241.82 3991.74 26929.34 00:22:13.630 { 00:22:13.630 "results": [ 00:22:13.630 { 00:22:13.630 "job": "nvme0n1", 00:22:13.630 "core_mask": "0x2", 00:22:13.630 "workload": "randwrite", 00:22:13.630 "status": "finished", 00:22:13.630 "queue_depth": 128, 00:22:13.630 "io_size": 4096, 00:22:13.630 "runtime": 2.007362, 00:22:13.630 "iops": 20459.189722630996, 00:22:13.630 "mibps": 79.91870985402733, 00:22:13.630 "io_failed": 0, 00:22:13.630 "io_timeout": 0, 00:22:13.630 "avg_latency_us": 6241.816968782027, 00:22:13.630 "min_latency_us": 3991.7381818181816, 00:22:13.630 "max_latency_us": 26929.33818181818 00:22:13.630 } 00:22:13.630 ], 00:22:13.630 "core_count": 1 00:22:13.630 } 00:22:13.630 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:13.630 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:13.630 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:13.630 | .driver_specific 00:22:13.630 | .nvme_error 00:22:13.630 | .status_code 00:22:13.630 | .command_transient_transport_error' 00:22:13.630 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95614 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95614 ']' 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95614 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95614 00:22:13.888 killing process with pid 
95614 00:22:13.888 Received shutdown signal, test time was about 2.000000 seconds 00:22:13.888 00:22:13.888 Latency(us) 00:22:13.888 [2024-12-10T10:35:49.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.888 [2024-12-10T10:35:49.115Z] =================================================================================================================== 00:22:13.888 [2024-12-10T10:35:49.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:13.888 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95614' 00:22:13.889 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95614 00:22:13.889 10:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95614 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95668 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95668 /var/tmp/bperf.sock 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95668 ']' 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:13.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.889 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:13.889 [2024-12-10 10:35:49.065748] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:13.889 [2024-12-10 10:35:49.066047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 
00:22:13.889 Zero copy mechanism will not be used. 00:22:13.889 llocations --file-prefix=spdk_pid95668 ] 00:22:14.148 [2024-12-10 10:35:49.198860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.148 [2024-12-10 10:35:49.232968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.148 [2024-12-10 10:35:49.260968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:14.148 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.148 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:14.148 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:14.148 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:14.407 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:14.407 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.407 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:14.407 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.407 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.407 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.976 nvme0n1 00:22:14.976 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:14.976 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.976 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:14.976 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.976 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:14.976 10:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:14.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:14.976 Zero copy mechanism will not be used. 00:22:14.976 Running I/O for 2 seconds... 
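[Editor's note] The trace above shows the harness tearing down the previous 4 KiB run (pid 95614) and wiring up the next digest-error case: a fresh bdevperf (randwrite, 128 KiB I/O, queue depth 16, 2-second run) is started against /var/tmp/bperf.sock, nvme0 is attached over TCP to 10.0.0.3:4420 with data digest enabled (--ddgst), crc32c corruption is armed with accel_error_inject_error -o crc32c -t corrupt -i 32, and perform_tests starts the workload. The pass/fail check reads the per-bdev NVMe error counters (enabled earlier by bdev_nvme_set_options --nvme-error-stat) and requires the transient-transport-error count to be non-zero, exactly as the rpc.py + jq pipeline above does for nvme0n1. A rough Python sketch of that same check, assuming the rpc.py path and socket used in this run:

    import json
    import subprocess

    def get_transient_errcount(bdev: str, sock: str = "/var/tmp/bperf.sock") -> int:
        # Query bdev_get_iostat through the bperf RPC socket, as the harness does,
        # and walk the same field path as its jq filter:
        #   .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
        out = subprocess.run(
            ["/home/vagrant/spdk_repo/spdk/scripts/rpc.py", "-s", sock,
             "bdev_get_iostat", "-b", bdev],
            check=True, capture_output=True, text=True,
        ).stdout
        stat = json.loads(out)
        return stat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
            "command_transient_transport_error"]

    # e.g. assert get_transient_errcount("nvme0n1") > 0, mirroring the (( 161 > 0 )) check above

The digest being corrupted is the NVMe/TCP DDGST, a CRC-32C computed over the PDU payload; once the injected crc32c corruption skews that checksum, the digest no longer matches the data of the affected WRITEs, which is why each one is logged as a data digest error and completed with COMMAND TRANSIENT TRANSPORT ERROR (dnr:0, so the I/O remains retryable). A minimal, illustrative CRC-32C (not SPDK's accelerated implementation):

    def crc32c(data: bytes) -> int:
        # Bitwise reflected CRC-32C (Castagnoli polynomial, reflected form 0x82F63B78).
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 1:
                    crc = (crc >> 1) ^ 0x82F63B78
                else:
                    crc >>= 1
        return crc ^ 0xFFFFFFFF

    assert crc32c(b"123456789") == 0xE3069283  # widely cited CRC-32C check value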
00:22:14.976 [2024-12-10 10:35:50.058333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.058664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.058693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.063040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.063314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.063341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.067704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.068034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.068059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.072430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.072746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.072778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.077219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.077503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.077529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.081682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.081947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.081973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.086123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.086387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.086427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.090600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.090863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.090888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.095038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.095499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.095545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.099754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.976 [2024-12-10 10:35:50.100044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.976 [2024-12-10 10:35:50.100070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.976 [2024-12-10 10:35:50.104192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.104474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.104509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.108638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.108900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.108926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.113047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.113310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.113336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.117522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.117803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.117828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.121895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.122159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.122185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.126362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.126648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.126673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.130715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.130978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.131004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.135122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.135385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.135420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.139580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.139910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.139937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.144112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.144392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.144427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.148684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.148964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.148989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.153071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.153331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.153357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.157580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.157844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.157869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.161949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.162383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.162429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.166575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.166838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.170941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.171206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.171232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.175439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.175744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.175769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.179803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.180108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 
[2024-12-10 10:35:50.180132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.184391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.184772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.184811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.188921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.977 [2024-12-10 10:35:50.189183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.977 [2024-12-10 10:35:50.189209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.977 [2024-12-10 10:35:50.193302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.978 [2024-12-10 10:35:50.193771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.978 [2024-12-10 10:35:50.193802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.978 [2024-12-10 10:35:50.198338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:14.978 [2024-12-10 10:35:50.198655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.978 [2024-12-10 10:35:50.198694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.238 [2024-12-10 10:35:50.203415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.203783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.203815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.208503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.208764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.208789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.212842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.213274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.213305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.217599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.217878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.222008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.222269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.222294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.226576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.226837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.226862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.230892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.231158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.231183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.235264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.235579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.235632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.239936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.240382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.240426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.244496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.244795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.244821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.248941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.249207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.249233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.253370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.253690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.253720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.257938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.258204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.258229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.262322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.262599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.262633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.266662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.266926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.266951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.271102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.271365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.271390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.275679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.275974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.276020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.280151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.280413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.280447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.284642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.284903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.284927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.289145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.289451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.289486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.294123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.294618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.294666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.299192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.299513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.299540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.304268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.304638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.304665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.309236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.309547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.309569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.314138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.314623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.314670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.319242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.319564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.319596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.324101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.324369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.239 [2024-12-10 10:35:50.324423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.239 [2024-12-10 10:35:50.328970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.239 [2024-12-10 10:35:50.329246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.329272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.333703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.333957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.333983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.338414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.338689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.338716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.342958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 
[2024-12-10 10:35:50.343228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.343255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.347510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.347808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.347836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.352132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.352403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.352440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.356967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.357412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.357451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.361716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.361986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.362013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.366277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.366599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.366631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.370930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.371197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.371223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.375747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with 
pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.376061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.376086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.380403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.380773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.380811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.385091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.385368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.385407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.389731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.390009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.390036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.394325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.394692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.399094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.399551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.399582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.403822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.404095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.404120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.408410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.408760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.408823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.412979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.413248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.413274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.417702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.417970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.417996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.422242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.422565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.422596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.426971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.427416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.427455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.431721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.432019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.432045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.436516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.436787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.436813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.440941] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.441211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.441238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.445579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.445838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.445863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.450157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.450463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.450490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.240 [2024-12-10 10:35:50.454905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.240 [2024-12-10 10:35:50.455347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.240 [2024-12-10 10:35:50.455379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.241 [2024-12-10 10:35:50.459944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.241 [2024-12-10 10:35:50.460229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.241 [2024-12-10 10:35:50.460255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.465040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.465330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.465355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.470013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.470306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.470363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:15.501 [2024-12-10 10:35:50.474747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.475016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.475043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.479394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.479738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.484150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.484450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.484488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.489040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.489312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.489337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.493581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.493850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.493875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.498201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.498499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.498526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.502989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.503316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.503342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.507812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.508110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.508135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.512497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.512833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.512879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.517199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.517655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.517687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.521904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.522166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.522192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.526441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.526704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.526728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.530760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.531022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.531048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.535320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.535679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.535711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.539911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.540213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.540238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.544332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.501 [2024-12-10 10:35:50.544625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.501 [2024-12-10 10:35:50.544650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.501 [2024-12-10 10:35:50.548859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.549120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.553288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.553577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.553615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.557747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.558007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.558033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.562146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.562402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.562438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.566694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.567008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.567036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.571319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.571657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.571685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.576034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.576290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.576316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.580422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.580696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.580721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.584778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.585037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.585062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.589132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.589389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.589429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.593545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.593801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.593826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.597950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.598211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 
[2024-12-10 10:35:50.598236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.602328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.602600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.602625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.606862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.607123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.607148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.611291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.611572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.611605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.615979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.616262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.616287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.620604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.620868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.620893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.625128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.625393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.625426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.629629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.629894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.629919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.634085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.634350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.634375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.638645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.638912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.638938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.643138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.643404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.643439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.647693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.647977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.648017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.652330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.652601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.652626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.656716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.656975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.657001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.661188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.661495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.661549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.665604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.665863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.665888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.670024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.502 [2024-12-10 10:35:50.670283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.502 [2024-12-10 10:35:50.670308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.502 [2024-12-10 10:35:50.674505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.674765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.674790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.678912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.679211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.679239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.683460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.683772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.683798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.688029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.688289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.688314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.692544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.692854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.692881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.696963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.697221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.697246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.701461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.701717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.701742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.705844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.706103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.706127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.710319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.710588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.710613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.714692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.714948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.714973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.719172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.719450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.719475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.503 [2024-12-10 10:35:50.724163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.503 [2024-12-10 10:35:50.724448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.503 [2024-12-10 10:35:50.724483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.728926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.729198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.729240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.733705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.733962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.733987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.738178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.738445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.738470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.742605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.742861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.742886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.746970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.747230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.747255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.751609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.751928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.751970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.756180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 
[2024-12-10 10:35:50.756456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.756491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.760679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.760937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.760962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.765075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.765333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.765358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.769551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.769809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.769834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.773956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.774211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.774237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.778328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.778597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.778623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.782696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.782955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.782980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.786987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) 
with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.787244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.787270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.791289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.791575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.791652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.795745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.796047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.800338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.800616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.800642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.804856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.805138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.805162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.809417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.809695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.809721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.813783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.814040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.814061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.818189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.818522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.818560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.822666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.822941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.822967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.827168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.827425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.827460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.831605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.831885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.831910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.835979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.836283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.836308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.840458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.764 [2024-12-10 10:35:50.840730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.764 [2024-12-10 10:35:50.840755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.764 [2024-12-10 10:35:50.844857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.845113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.845137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.849438] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.849708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.849732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.853809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.854066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.854091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.858147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.858404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.858438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.862555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.862813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.862838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.866866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.867127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.867153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.871676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.871955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.871976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.876353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.876698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.876726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:15.765 [2024-12-10 10:35:50.881089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.881393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.881415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.885584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.885851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.885878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.889999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.890288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.890325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.894464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.894752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.894795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.898882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.899140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.899166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.903360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.903693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.903720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.907886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.908176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.908202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.912408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.912676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.912701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.916786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.917111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.917138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.921378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.921649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.921674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.925746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.926009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.926034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.930174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.930491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.930513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.934607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.934866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.934890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.939003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.939260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.939286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.943445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.943754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.943781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.947876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.948138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.948164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.952381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.952661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.952687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.956825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.957081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.957106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.961315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.961588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.961613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.965694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.765 [2024-12-10 10:35:50.965952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.765 [2024-12-10 10:35:50.965978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.765 [2024-12-10 10:35:50.970130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.766 [2024-12-10 10:35:50.970387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.766 [2024-12-10 10:35:50.970421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.766 [2024-12-10 10:35:50.974667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.766 [2024-12-10 10:35:50.974927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.766 [2024-12-10 10:35:50.974953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.766 [2024-12-10 10:35:50.979139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.766 [2024-12-10 10:35:50.979443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.766 [2024-12-10 10:35:50.979469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.766 [2024-12-10 10:35:50.984010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:15.766 [2024-12-10 10:35:50.984309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.766 [2024-12-10 10:35:50.984352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:50.989583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:50.989928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:50.989955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:50.995040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:50.995355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:50.995381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.000483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.000821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.000876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.005346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.005675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 
[2024-12-10 10:35:51.005703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.010116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.010377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.010443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.015000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.015261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.015286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.019795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.020149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.020176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.024555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.024818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.024845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.028865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.029123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.029149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.025 [2024-12-10 10:35:51.033256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.025 [2024-12-10 10:35:51.033546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.025 [2024-12-10 10:35:51.033572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.037681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.037940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.037966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.042119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.042376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.042411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.046536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.046815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.046840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 6727.00 IOPS, 840.88 MiB/s [2024-12-10T10:35:51.253Z] [2024-12-10 10:35:51.052390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.052665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.056763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.057007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.057033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.061166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.061456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.061477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.065625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.065919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.065946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.070133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.070392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.070430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.074708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.074951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.074977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.079117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.079373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.079413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.083587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.083888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.083928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.088137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.088395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.088416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.092630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.092945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.092973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.097245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.097522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.097549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.101779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 
[2024-12-10 10:35:51.102044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.102070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.106235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.106504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.106530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.110627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.110886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.110912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.114931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.115188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.115213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.119395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.119701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.119727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.123858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.124151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.124177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.128291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.128560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.128585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.132777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) 
with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.133055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.133081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.137202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.137487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.137513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.141693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.141951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.141976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.146059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.146318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.146345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.150500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.150757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.150781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.154825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.155082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.026 [2024-12-10 10:35:51.155106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.026 [2024-12-10 10:35:51.159230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.026 [2024-12-10 10:35:51.159516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.159542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.163788] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.164104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.164129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.168339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.168612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.168637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.172916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.173175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.173201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.177299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.177571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.177597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.181744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.182003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.182028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.186082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.186340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.186366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.190537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.190797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.190823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.194872] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.195129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.195155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.199379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.199682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.203848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.204142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.204166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.208361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.208643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.208668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.212785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.213043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.213068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.217143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.217400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.217434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.027 [2024-12-10 10:35:51.221670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.027 [2024-12-10 10:35:51.221926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.027 [2024-12-10 10:35:51.221951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:16.027 .. 00:22:16.812 [2024-12-10 10:35:51.226150 .. 10:35:51.820953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 -- the same error recurs every few milliseconds throughout this interval, once per WRITE sqid:1 cid:15 nsid:1 len:32 (lba varies per command); each occurrence is followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE* for that WRITE and by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001/0021/0041/0061 p:0 m:0 dnr:0
00:22:16.812 [2024-12-10 10:35:51.825170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 [2024-12-10 10:35:51.825453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15
nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.812 [2024-12-10 10:35:51.825479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.812 [2024-12-10 10:35:51.829680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.812 [2024-12-10 10:35:51.829936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.829962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.834006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.834263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.834289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.838434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.838693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.838718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.842746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.843005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.843031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.847246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.847547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.847573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.851783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.852081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.852106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.856260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.856551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.856576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.860602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.860879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.860904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.865039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.865296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.865320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.869520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.869780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.869805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.874011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.874271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.874297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.878414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.878674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.878699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.882778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.883038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.883064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.887120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 
[2024-12-10 10:35:51.887376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.887409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.891469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.891780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.891806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.896258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.896569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.896597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.901222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.901552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.901578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.905896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.906172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.906197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.910481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.910745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.910770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.914953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.915216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.915242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.919335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with 
pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.919630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.919672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.923751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.924058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.928372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.928688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.932811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.933067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.933092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.937282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.937581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.937607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.941682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.941960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.941985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.946031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.946287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.946312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.950559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.950819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.813 [2024-12-10 10:35:51.950844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.813 [2024-12-10 10:35:51.954877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.813 [2024-12-10 10:35:51.955132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.955157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.959274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.959561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.959586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.963723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.964020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.964044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.968162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.968418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.968451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.972650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.972908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.972933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.977089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.977349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.977374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.981572] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.981852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.981877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.985957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.986213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.986238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.990317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.990588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.990614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.994761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.995024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.995050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:51.999252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:51.999521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:51.999545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:52.004067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.004331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.004356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:52.009056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.009323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.009348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:16.814 [2024-12-10 10:35:52.013972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.014244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.014271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:52.018901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.019182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.019209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:52.024076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.024353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.024378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:52.028994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.029250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.029275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.814 [2024-12-10 10:35:52.034130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:16.814 [2024-12-10 10:35:52.034423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.814 [2024-12-10 10:35:52.034492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:17.074 [2024-12-10 10:35:52.039268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:17.074 [2024-12-10 10:35:52.039584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.074 [2024-12-10 10:35:52.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:17.074 [2024-12-10 10:35:52.044594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x605770) with pdu=0x2000198fef90 00:22:17.074 [2024-12-10 10:35:52.044901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.074 [2024-12-10 10:35:52.044942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:17.074 6765.50 IOPS, 845.69 MiB/s 00:22:17.074 Latency(us) 00:22:17.074 [2024-12-10T10:35:52.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.074 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:17.074 nvme0n1 : 2.00 6762.00 845.25 0.00 0.00 2361.15 1921.40 9770.82 00:22:17.074 [2024-12-10T10:35:52.301Z] =================================================================================================================== 00:22:17.074 [2024-12-10T10:35:52.301Z] Total : 6762.00 845.25 0.00 0.00 2361.15 1921.40 9770.82 00:22:17.074 { 00:22:17.074 "results": [ 00:22:17.074 { 00:22:17.074 "job": "nvme0n1", 00:22:17.074 "core_mask": "0x2", 00:22:17.074 "workload": "randwrite", 00:22:17.074 "status": "finished", 00:22:17.074 "queue_depth": 16, 00:22:17.074 "io_size": 131072, 00:22:17.074 "runtime": 2.003401, 00:22:17.074 "iops": 6762.001216930609, 00:22:17.074 "mibps": 845.2501521163261, 00:22:17.074 "io_failed": 0, 00:22:17.074 "io_timeout": 0, 00:22:17.074 "avg_latency_us": 2361.1455315836447, 00:22:17.074 "min_latency_us": 1921.3963636363637, 00:22:17.074 "max_latency_us": 9770.821818181817 00:22:17.074 } 00:22:17.074 ], 00:22:17.074 "core_count": 1 00:22:17.074 } 00:22:17.074 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:17.074 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:17.074 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:17.074 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:17.074 | .driver_specific 00:22:17.074 | .nvme_error 00:22:17.074 | .status_code 00:22:17.074 | .command_transient_transport_error' 00:22:17.333 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 436 > 0 )) 00:22:17.333 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95668 00:22:17.333 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95668 ']' 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95668 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95668 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:17.334 killing process with pid 95668 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95668' 00:22:17.334 Received shutdown signal, test time was about 2.000000 seconds 00:22:17.334 00:22:17.334 Latency(us) 00:22:17.334 [2024-12-10T10:35:52.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.334 
[2024-12-10T10:35:52.561Z] =================================================================================================================== 00:22:17.334 [2024-12-10T10:35:52.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95668 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95668 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95487 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95487 ']' 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95487 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95487 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:17.334 killing process with pid 95487 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95487' 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95487 00:22:17.334 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95487 00:22:17.593 00:22:17.593 real 0m15.128s 00:22:17.593 user 0m28.750s 00:22:17.593 sys 0m4.323s 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:17.593 ************************************ 00:22:17.593 END TEST nvmf_digest_error 00:22:17.593 ************************************ 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.593 rmmod nvme_tcp 00:22:17.593 rmmod nvme_fabrics 00:22:17.593 rmmod nvme_keyring 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:17.593 10:35:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 95487 ']' 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 95487 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 95487 ']' 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 95487 00:22:17.593 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (95487) - No such process 00:22:17.593 Process with pid 95487 is not found 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 95487 is not found' 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:22:17.593 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.594 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:17.594 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.853 10:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.853 10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:17.853 10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.853 10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.853 10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.853 
10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:17.853 ************************************ 00:22:17.853 END TEST nvmf_digest 00:22:17.853 ************************************ 00:22:17.853 00:22:17.853 real 0m31.600s 00:22:17.853 user 0m58.931s 00:22:17.853 sys 0m9.223s 00:22:17.853 10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.853 10:35:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.113 ************************************ 00:22:18.113 START TEST nvmf_host_multipath 00:22:18.113 ************************************ 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:18.113 * Looking for test storage... 00:22:18.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:18.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.113 --rc genhtml_branch_coverage=1 00:22:18.113 --rc genhtml_function_coverage=1 00:22:18.113 --rc genhtml_legend=1 00:22:18.113 --rc geninfo_all_blocks=1 00:22:18.113 --rc geninfo_unexecuted_blocks=1 00:22:18.113 00:22:18.113 ' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:18.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.113 --rc genhtml_branch_coverage=1 00:22:18.113 --rc genhtml_function_coverage=1 00:22:18.113 --rc genhtml_legend=1 00:22:18.113 --rc geninfo_all_blocks=1 00:22:18.113 --rc geninfo_unexecuted_blocks=1 00:22:18.113 00:22:18.113 ' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:18.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.113 --rc genhtml_branch_coverage=1 00:22:18.113 --rc genhtml_function_coverage=1 00:22:18.113 --rc genhtml_legend=1 00:22:18.113 --rc geninfo_all_blocks=1 00:22:18.113 --rc geninfo_unexecuted_blocks=1 00:22:18.113 00:22:18.113 ' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:18.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.113 --rc genhtml_branch_coverage=1 00:22:18.113 --rc genhtml_function_coverage=1 00:22:18.113 --rc genhtml_legend=1 00:22:18.113 --rc geninfo_all_blocks=1 00:22:18.113 --rc geninfo_unexecuted_blocks=1 00:22:18.113 00:22:18.113 ' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:18.113 
10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.113 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.114 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.114 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:18.373 Cannot find device "nvmf_init_br" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:18.373 Cannot find device "nvmf_init_br2" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:18.373 Cannot find device "nvmf_tgt_br" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.373 Cannot find device "nvmf_tgt_br2" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:18.373 Cannot find device "nvmf_init_br" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:18.373 Cannot find device "nvmf_init_br2" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:18.373 Cannot find device "nvmf_tgt_br" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:18.373 Cannot find device "nvmf_tgt_br2" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:18.373 Cannot find device "nvmf_br" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:18.373 Cannot find device "nvmf_init_if" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:18.373 Cannot find device "nvmf_init_if2" 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:18.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:18.373 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
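For readers following the nvmf_veth_init sequence traced above and continued below: the script builds a small veth/bridge topology in which two initiator interfaces stay on the host (10.0.0.1 and 10.0.0.2), two target interfaces are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and every peer end is enslaved to the nvmf_br bridge. The following is a condensed sketch of that sequence, recapping the commands visible in this trace; it is not a verbatim copy of nvmf/common.sh, and the ordering of the link-up steps is simplified.

    # create the target namespace and the four veth pairs (names as in the log)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side ends live inside the namespace; initiator ends stay on the host
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses match the NVMF_*_IP values set earlier in the trace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, including loopback inside the namespace
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the peer ends together so 10.0.0.1 through 10.0.0.4 can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br
    done

Once the remaining bridge ports are enslaved in the entries that follow, the script inserts iptables ACCEPT rules for TCP port 4420 and pings 10.0.0.3, 10.0.0.4, 10.0.0.1 and 10.0.0.2 to verify connectivity before the nvmf target is started.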
00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:18.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:18.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:22:18.633 00:22:18.633 --- 10.0.0.3 ping statistics --- 00:22:18.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.633 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:18.633 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:18.633 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:22:18.633 00:22:18.633 --- 10.0.0.4 ping statistics --- 00:22:18.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.633 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:18.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:22:18.633 00:22:18.633 --- 10.0.0.1 ping statistics --- 00:22:18.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.633 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:18.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:18.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:22:18.633 00:22:18.633 --- 10.0.0.2 ping statistics --- 00:22:18.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.633 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=95975 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 95975 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95975 ']' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.633 10:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:18.633 [2024-12-10 10:35:53.848361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:18.633 [2024-12-10 10:35:53.848471] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.892 [2024-12-10 10:35:53.990846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:18.892 [2024-12-10 10:35:54.033894] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.892 [2024-12-10 10:35:54.033957] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.892 [2024-12-10 10:35:54.033971] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.893 [2024-12-10 10:35:54.033981] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.893 [2024-12-10 10:35:54.033991] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.893 [2024-12-10 10:35:54.034160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.893 [2024-12-10 10:35:54.034174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.893 [2024-12-10 10:35:54.069230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:18.893 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:18.893 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:22:18.893 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:18.893 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.152 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:19.152 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.152 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95975 00:22:19.152 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:19.411 [2024-12-10 10:35:54.440765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.411 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:19.670 Malloc0 00:22:19.670 10:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:19.929 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.187 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:20.446 [2024-12-10 10:35:55.500607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:20.446 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:20.706 [2024-12-10 10:35:55.728699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96023 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96023 /var/tmp/bdevperf.sock 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 96023 ']' 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.706 10:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:21.643 10:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.643 10:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:22:21.643 10:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:21.903 10:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:22.162 Nvme0n1 00:22:22.162 10:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:22.421 Nvme0n1 00:22:22.680 10:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:22.680 10:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:23.617 10:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:23.617 10:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:23.876 10:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:24.135 10:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:24.135 10:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96067 00:22:24.135 10:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:24.135 10:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:30.703 Attaching 4 probes... 00:22:30.703 @path[10.0.0.3, 4421]: 19440 00:22:30.703 @path[10.0.0.3, 4421]: 19772 00:22:30.703 @path[10.0.0.3, 4421]: 19390 00:22:30.703 @path[10.0.0.3, 4421]: 19932 00:22:30.703 @path[10.0.0.3, 4421]: 20368 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96067 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96183 00:22:30.703 10:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:30.703 10:36:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:37.273 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:37.273 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:37.273 Attaching 4 probes... 00:22:37.273 @path[10.0.0.3, 4420]: 20546 00:22:37.273 @path[10.0.0.3, 4420]: 20796 00:22:37.273 @path[10.0.0.3, 4420]: 20813 00:22:37.273 @path[10.0.0.3, 4420]: 20561 00:22:37.273 @path[10.0.0.3, 4420]: 20780 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96183 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:37.273 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:37.532 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:37.532 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96301 00:22:37.532 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:37.532 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:44.100 Attaching 4 probes... 00:22:44.100 @path[10.0.0.3, 4421]: 15214 00:22:44.100 @path[10.0.0.3, 4421]: 20120 00:22:44.100 @path[10.0.0.3, 4421]: 20071 00:22:44.100 @path[10.0.0.3, 4421]: 20086 00:22:44.100 @path[10.0.0.3, 4421]: 20169 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96301 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:44.100 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:44.100 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:44.359 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:44.359 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96415 00:22:44.359 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:44.359 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:50.977 Attaching 4 probes... 
00:22:50.977 00:22:50.977 00:22:50.977 00:22:50.977 00:22:50.977 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96415 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:50.977 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:50.977 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:51.236 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:51.236 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96529 00:22:51.236 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:51.236 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:57.804 Attaching 4 probes... 
00:22:57.804 @path[10.0.0.3, 4421]: 19711 00:22:57.804 @path[10.0.0.3, 4421]: 19896 00:22:57.804 @path[10.0.0.3, 4421]: 19895 00:22:57.804 @path[10.0.0.3, 4421]: 19878 00:22:57.804 @path[10.0.0.3, 4421]: 20024 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96529 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:57.804 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:59.182 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:59.182 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96654 00:22:59.182 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:59.182 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:05.748 10:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:05.748 10:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:05.748 Attaching 4 probes... 
00:23:05.748 @path[10.0.0.3, 4420]: 19549 00:23:05.748 @path[10.0.0.3, 4420]: 20024 00:23:05.748 @path[10.0.0.3, 4420]: 20072 00:23:05.748 @path[10.0.0.3, 4420]: 19921 00:23:05.748 @path[10.0.0.3, 4420]: 19933 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96654 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:05.748 [2024-12-10 10:36:40.552732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:05.748 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:12.316 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:12.316 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96828 00:23:12.316 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:12.316 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:18.900 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:18.900 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:18.900 Attaching 4 probes... 
00:23:18.900 @path[10.0.0.3, 4421]: 19458 00:23:18.900 @path[10.0.0.3, 4421]: 19637 00:23:18.900 @path[10.0.0.3, 4421]: 19503 00:23:18.900 @path[10.0.0.3, 4421]: 19690 00:23:18.900 @path[10.0.0.3, 4421]: 19731 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96828 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96023 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 96023 ']' 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 96023 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96023 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.900 killing process with pid 96023 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96023' 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 96023 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 96023 00:23:18.900 { 00:23:18.900 "results": [ 00:23:18.900 { 00:23:18.900 "job": "Nvme0n1", 00:23:18.900 "core_mask": "0x4", 00:23:18.900 "workload": "verify", 00:23:18.900 "status": "terminated", 00:23:18.900 "verify_range": { 00:23:18.900 "start": 0, 00:23:18.900 "length": 16384 00:23:18.900 }, 00:23:18.900 "queue_depth": 128, 00:23:18.900 "io_size": 4096, 00:23:18.900 "runtime": 55.468221, 00:23:18.900 "iops": 8471.301071653263, 00:23:18.900 "mibps": 33.09101981114556, 00:23:18.900 "io_failed": 0, 00:23:18.900 "io_timeout": 0, 00:23:18.900 "avg_latency_us": 15080.07092543523, 00:23:18.900 "min_latency_us": 176.87272727272727, 00:23:18.900 "max_latency_us": 7046430.72 00:23:18.900 } 00:23:18.900 ], 00:23:18.900 "core_count": 1 00:23:18.900 } 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96023 00:23:18.900 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:18.900 [2024-12-10 10:35:55.794164] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 
23.11.0 initialization... 00:23:18.900 [2024-12-10 10:35:55.794249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96023 ] 00:23:18.900 [2024-12-10 10:35:55.923931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.900 [2024-12-10 10:35:55.959055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.900 [2024-12-10 10:35:55.988461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:18.900 [2024-12-10 10:35:57.618532] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:23:18.900 Running I/O for 90 seconds... 00:23:18.900 7956.00 IOPS, 31.08 MiB/s [2024-12-10T10:36:54.127Z] 8797.00 IOPS, 34.36 MiB/s [2024-12-10T10:36:54.127Z] 9166.00 IOPS, 35.80 MiB/s [2024-12-10T10:36:54.127Z] 9344.50 IOPS, 36.50 MiB/s [2024-12-10T10:36:54.127Z] 9410.80 IOPS, 36.76 MiB/s [2024-12-10T10:36:54.127Z] 9511.00 IOPS, 37.15 MiB/s [2024-12-10T10:36:54.127Z] 9608.29 IOPS, 37.53 MiB/s [2024-12-10T10:36:54.127Z] 9657.75 IOPS, 37.73 MiB/s [2024-12-10T10:36:54.127Z] [2024-12-10 10:36:05.880286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.900 
[2024-12-10 10:36:05.880570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.900 [2024-12-10 10:36:05.880763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.900 [2024-12-10 10:36:05.880777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.901 [2024-12-10 10:36:05.880809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.901 [2024-12-10 10:36:05.880841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.901 [2024-12-10 10:36:05.880875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.901 [2024-12-10 10:36:05.880907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.880939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.880973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.880992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:18.901 [2024-12-10 10:36:05.881267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.901 [2024-12-10 10:36:05.881281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:18.901 [2024-12-10 10:36:05.881301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.901 [2024-12-10 10:36:05.881315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... hundreds of similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted (00:23:18.901-00:23:18.906, 2024-12-10 10:36:05.881335-10:36:05.902048): READ and WRITE commands on sqid:1 nsid:1, len:8, lba 104272-105288, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:23:18.906 [2024-12-10 10:36:05.902068] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.906 [2024-12-10 10:36:05.902680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.906 [2024-12-10 10:36:05.902700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.902728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.902748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.902783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.902803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.902834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.902855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.902884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.902904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.902933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.902952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.902981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 
10:36:05.903617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.903941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.903972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.906106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.906167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.906216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.907 [2024-12-10 10:36:05.906264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:18.907 [2024-12-10 10:36:05.906898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.907 [2024-12-10 10:36:05.906917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.906946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.906965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.907032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.907080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.907128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.907968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.907988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 
10:36:05.908352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.908 [2024-12-10 10:36:05.908460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.908 [2024-12-10 10:36:05.908828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:18.908 [2024-12-10 10:36:05.908864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.908884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.908912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.908931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.908960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.908980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.909567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.909952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.909973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:40 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.909 [2024-12-10 10:36:05.910900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:18.909 [2024-12-10 10:36:05.910928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.909 [2024-12-10 10:36:05.910947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 
10:36:05.910976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.910995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.910 [2024-12-10 10:36:05.911729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.911765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.911810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.911847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.911913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.911947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.911967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.911996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.912529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.912545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.914112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.914141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.914178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.914197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.914218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.910 [2024-12-10 10:36:05.914233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:18.910 [2024-12-10 10:36:05.914252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.914266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.914312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.914941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.914974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.914994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.911 [2024-12-10 10:36:05.915542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.915578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.915642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.915688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.915725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 10:36:05.915788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.911 [2024-12-10 
10:36:05.915824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:18.911 [2024-12-10 10:36:05.915846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.915877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.915897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.915927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.915948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.915962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.915997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104968 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.916717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.916979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.916993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:23:18.912 [2024-12-10 10:36:05.917012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.912 [2024-12-10 10:36:05.917026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.912 [2024-12-10 10:36:05.917381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:18.912 [2024-12-10 10:36:05.917401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.917703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917772] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.917974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.917994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104608 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.913 [2024-12-10 10:36:05.918281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.913 [2024-12-10 10:36:05.918721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.913 [2024-12-10 10:36:05.918735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 
m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.918969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.918983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.919003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.919023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:05.920164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:05.920193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.914 9648.33 IOPS, 37.69 MiB/s [2024-12-10T10:36:54.141Z] 9718.70 IOPS, 37.96 MiB/s [2024-12-10T10:36:54.141Z] 9782.45 IOPS, 38.21 MiB/s [2024-12-10T10:36:54.141Z] 9840.25 IOPS, 38.44 MiB/s [2024-12-10T10:36:54.141Z] 9875.92 IOPS, 38.58 MiB/s [2024-12-10T10:36:54.141Z] 9911.07 IOPS, 38.72 MiB/s [2024-12-10T10:36:54.141Z] [2024-12-10 10:36:12.388000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.388378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.388937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 
10:36:12.388970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.388996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.914 [2024-12-10 10:36:12.389012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.389045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.389085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.389119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.389152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.389186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.914 [2024-12-10 10:36:12.389219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:18.914 [2024-12-10 10:36:12.389238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.389252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.389286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113680 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.389319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.389962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.389984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.389998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.390032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 
m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.390065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.390098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.390131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.390164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.915 [2024-12-10 10:36:12.390197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:18.915 [2024-12-10 10:36:12.390658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.915 [2024-12-10 10:36:12.390672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 
10:36:12.390740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.390973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.390987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.391102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.391141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113752 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.916 [2024-12-10 10:36:12.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.391953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.391988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d 
p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.392248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.392263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.393175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.393203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.393235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.393250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.393277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.916 [2024-12-10 10:36:12.393302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.916 [2024-12-10 10:36:12.393330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.393346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.393388] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.393444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.393502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:12.393795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 
[2024-12-10 10:36:12.393878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.393922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.393963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.393988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.394003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.394028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.394042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.394069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.394083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.394109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.394124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.394150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.394165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:12.394191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.917 [2024-12-10 10:36:12.394205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:18.917 9679.13 IOPS, 37.81 MiB/s [2024-12-10T10:36:54.144Z] 9293.62 IOPS, 36.30 MiB/s [2024-12-10T10:36:54.144Z] 9339.65 IOPS, 36.48 MiB/s [2024-12-10T10:36:54.144Z] 9383.44 IOPS, 36.65 MiB/s [2024-12-10T10:36:54.144Z] 9418.42 IOPS, 36.79 MiB/s [2024-12-10T10:36:54.144Z] 9446.70 IOPS, 36.90 MiB/s [2024-12-10T10:36:54.144Z] 9472.67 IOPS, 37.00 MiB/s [2024-12-10T10:36:54.144Z] [2024-12-10 10:36:19.543198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:18.917 [2024-12-10 10:36:19.543794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.917 [2024-12-10 10:36:19.543807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.543826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.543840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.543860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.543882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.543904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.543918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.543939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.543954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.543988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.544018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.544052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:23:18.918 [2024-12-10 10:36:19.544072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.544086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.544119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.544153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.544186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.544971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.544986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.918 [2024-12-10 10:36:19.545331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:18.918 [2024-12-10 10:36:19.545570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.918 [2024-12-10 10:36:19.545586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.545959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.545979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 
dnr:0 00:23:18.919 [2024-12-10 10:36:19.546534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.919 [2024-12-10 10:36:19.546686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.546964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.546987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.547002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.547023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.919 [2024-12-10 10:36:19.547037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:18.919 [2024-12-10 10:36:19.547057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.547072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.547106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.547141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.547377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.547391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.920 [2024-12-10 10:36:19.548186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:18.920 [2024-12-10 10:36:19.548398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.548969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.548984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.549010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.549025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.549051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.549066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.549092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.549107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:19.549133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:19.549157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:18.920 9396.27 IOPS, 36.70 MiB/s [2024-12-10T10:36:54.147Z] 8987.74 IOPS, 35.11 MiB/s [2024-12-10T10:36:54.147Z] 8613.25 IOPS, 33.65 MiB/s [2024-12-10T10:36:54.147Z] 8268.72 IOPS, 32.30 MiB/s [2024-12-10T10:36:54.147Z] 7950.69 IOPS, 31.06 MiB/s [2024-12-10T10:36:54.147Z] 7656.22 IOPS, 29.91 MiB/s [2024-12-10T10:36:54.147Z] 7382.79 IOPS, 28.84 MiB/s [2024-12-10T10:36:54.147Z] 7192.83 IOPS, 28.10 MiB/s [2024-12-10T10:36:54.147Z] 7282.40 IOPS, 28.45 MiB/s [2024-12-10T10:36:54.147Z] 7368.00 IOPS, 28.78 MiB/s [2024-12-10T10:36:54.147Z] 7448.38 IOPS, 29.10 MiB/s [2024-12-10T10:36:54.147Z] 7524.12 IOPS, 29.39 MiB/s [2024-12-10T10:36:54.147Z] 7597.53 IOPS, 29.68 MiB/s [2024-12-10T10:36:54.147Z] 7663.77 IOPS, 29.94 MiB/s [2024-12-10T10:36:54.147Z] [2024-12-10 10:36:32.955952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:32.956018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:32.956085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:32.956104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:32.956125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:32.956139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:32.956158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:32.956171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.920 [2024-12-10 10:36:32.956190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.920 [2024-12-10 10:36:32.956204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:23:18.921 [2024-12-10 10:36:32.956386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.956654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.956976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.956992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.921 [2024-12-10 10:36:32.957378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.921 [2024-12-10 10:36:32.957459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.921 [2024-12-10 10:36:32.957472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 
10:36:32.957698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.957853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.957880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.957907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.957933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.957960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.957974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.957995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.922 [2024-12-10 10:36:32.958525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64064 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.922 [2024-12-10 10:36:32.958551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.922 [2024-12-10 10:36:32.958566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.923 [2024-12-10 10:36:32.958744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 
[2024-12-10 10:36:32.958825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.958974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.958987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.959014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.959046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.959074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.959101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.959127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.923 [2024-12-10 10:36:32.959155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20008f0 is same with the state(6) to be set 00:23:18.923 [2024-12-10 10:36:32.959184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63736 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:23:18.923 [2024-12-10 10:36:32.959436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64152 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64160 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64168 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64184 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 10:36:32.959702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64192 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.923 [2024-12-10 10:36:32.959727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.923 [2024-12-10 10:36:32.959737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.923 [2024-12-10 
10:36:32.959747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64200 len:8 PRP1 0x0 PRP2 0x0 00:23:18.923 [2024-12-10 10:36:32.959760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.959773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.959782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.959792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64208 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.959804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.959818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.959834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.959845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64216 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.959858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.959871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.959881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.959906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64224 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.959919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.959931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.959941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.959953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64232 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.959981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.959993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64240 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64248 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64256 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64264 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64272 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64280 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64288 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:64296 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64304 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.924 [2024-12-10 10:36:32.960403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.924 [2024-12-10 10:36:32.960413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64312 len:8 PRP1 0x0 PRP2 0x0 00:23:18.924 [2024-12-10 10:36:32.960424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.960519] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20008f0 was disconnected and freed. reset controller. 00:23:18.924 [2024-12-10 10:36:32.961612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.924 [2024-12-10 10:36:32.961700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.924 [2024-12-10 10:36:32.961722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.924 [2024-12-10 10:36:32.961753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcd370 (9): Bad file descriptor 00:23:18.924 [2024-12-10 10:36:32.962112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.924 [2024-12-10 10:36:32.962143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fcd370 with addr=10.0.0.3, port=4421 00:23:18.924 [2024-12-10 10:36:32.962160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fcd370 is same with the state(6) to be set 00:23:18.924 [2024-12-10 10:36:32.962218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcd370 (9): Bad file descriptor 00:23:18.924 [2024-12-10 10:36:32.962252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.924 [2024-12-10 10:36:32.962282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:18.924 [2024-12-10 10:36:32.962297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.924 [2024-12-10 10:36:32.962328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
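The entries above show the host side of a path loss during the multipath run: queued I/O on the qpair for port 4420 is aborted with SQ DELETION, bdev_nvme disconnects and resets the controller, and reconnect attempts to 10.0.0.3 port 4421 first fail with errno 111 (connection refused) before the reset eventually succeeds (see the "Resetting controller successful" entry below). A failover of this kind can be provoked by toggling the subsystem's listeners; the following is only a hypothetical sketch that reuses RPC calls appearing elsewhere in this log (nvmf_subsystem_remove_listener / nvmf_subsystem_add_listener), not the exact commands the multipath script issued, and it assumes the target is already listening on 10.0.0.3 ports 4420 and 4421.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the first path; in-flight I/O on that qpair is aborted (SQ DELETION) and bdev_nvme resets the controller.
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 5   # give the host time to reconnect through the remaining listener on port 4421
  # Restore the first path so a later failback can be exercised.
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420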
00:23:18.924 [2024-12-10 10:36:32.962343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.924 7724.92 IOPS, 30.18 MiB/s [2024-12-10T10:36:54.151Z] 7777.70 IOPS, 30.38 MiB/s [2024-12-10T10:36:54.151Z] 7834.55 IOPS, 30.60 MiB/s [2024-12-10T10:36:54.151Z] 7889.67 IOPS, 30.82 MiB/s [2024-12-10T10:36:54.151Z] 7942.02 IOPS, 31.02 MiB/s [2024-12-10T10:36:54.151Z] 7993.20 IOPS, 31.22 MiB/s [2024-12-10T10:36:54.151Z] 8039.93 IOPS, 31.41 MiB/s [2024-12-10T10:36:54.151Z] 8078.53 IOPS, 31.56 MiB/s [2024-12-10T10:36:54.151Z] 8119.11 IOPS, 31.72 MiB/s [2024-12-10T10:36:54.151Z] 8157.71 IOPS, 31.87 MiB/s [2024-12-10T10:36:54.151Z] [2024-12-10 10:36:43.026604] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:18.924 8195.65 IOPS, 32.01 MiB/s [2024-12-10T10:36:54.151Z] 8233.53 IOPS, 32.16 MiB/s [2024-12-10T10:36:54.151Z] 8267.33 IOPS, 32.29 MiB/s [2024-12-10T10:36:54.151Z] 8301.06 IOPS, 32.43 MiB/s [2024-12-10T10:36:54.151Z] 8325.12 IOPS, 32.52 MiB/s [2024-12-10T10:36:54.151Z] 8354.51 IOPS, 32.63 MiB/s [2024-12-10T10:36:54.151Z] 8384.15 IOPS, 32.75 MiB/s [2024-12-10T10:36:54.151Z] 8410.11 IOPS, 32.85 MiB/s [2024-12-10T10:36:54.151Z] 8436.74 IOPS, 32.96 MiB/s [2024-12-10T10:36:54.151Z] 8462.69 IOPS, 33.06 MiB/s [2024-12-10T10:36:54.151Z] Received shutdown signal, test time was about 55.468984 seconds 00:23:18.924 00:23:18.924 Latency(us) 00:23:18.924 [2024-12-10T10:36:54.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.924 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.924 Verification LBA range: start 0x0 length 0x4000 00:23:18.924 Nvme0n1 : 55.47 8471.30 33.09 0.00 0.00 15080.07 176.87 7046430.72 00:23:18.924 [2024-12-10T10:36:54.151Z] =================================================================================================================== 00:23:18.924 [2024-12-10T10:36:54.151Z] Total : 8471.30 33.09 0.00 0.00 15080.07 176.87 7046430.72 00:23:18.924 [2024-12-10 10:36:53.220969] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.924 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.925 rmmod nvme_tcp 00:23:18.925 rmmod nvme_fabrics 00:23:18.925 rmmod nvme_keyring 00:23:18.925 10:36:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 95975 ']' 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 95975 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95975 ']' 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95975 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95975 00:23:18.925 killing process with pid 95975 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95975' 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95975 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95975 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:18.925 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:18.925 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:23:18.925 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:18.925 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:18.925 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:18.925 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:18.925 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:23:19.184 00:23:19.184 real 1m1.058s 00:23:19.184 user 2m48.886s 00:23:19.184 sys 0m18.585s 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:19.184 ************************************ 00:23:19.184 END TEST nvmf_host_multipath 00:23:19.184 ************************************ 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.184 ************************************ 00:23:19.184 START TEST nvmf_timeout 00:23:19.184 ************************************ 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:19.184 * Looking for test storage... 
00:23:19.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.184 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:19.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.185 --rc genhtml_branch_coverage=1 00:23:19.185 --rc genhtml_function_coverage=1 00:23:19.185 --rc genhtml_legend=1 00:23:19.185 --rc geninfo_all_blocks=1 00:23:19.185 --rc geninfo_unexecuted_blocks=1 00:23:19.185 00:23:19.185 ' 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:19.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.185 --rc genhtml_branch_coverage=1 00:23:19.185 --rc genhtml_function_coverage=1 00:23:19.185 --rc genhtml_legend=1 00:23:19.185 --rc geninfo_all_blocks=1 00:23:19.185 --rc geninfo_unexecuted_blocks=1 00:23:19.185 00:23:19.185 ' 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:19.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.185 --rc genhtml_branch_coverage=1 00:23:19.185 --rc genhtml_function_coverage=1 00:23:19.185 --rc genhtml_legend=1 00:23:19.185 --rc geninfo_all_blocks=1 00:23:19.185 --rc geninfo_unexecuted_blocks=1 00:23:19.185 00:23:19.185 ' 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:19.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.185 --rc genhtml_branch_coverage=1 00:23:19.185 --rc genhtml_function_coverage=1 00:23:19.185 --rc genhtml_legend=1 00:23:19.185 --rc geninfo_all_blocks=1 00:23:19.185 --rc geninfo_unexecuted_blocks=1 00:23:19.185 00:23:19.185 ' 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:19.185 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.444 
10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.444 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.445 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:19.445 10:36:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:19.445 Cannot find device "nvmf_init_br" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:19.445 Cannot find device "nvmf_init_br2" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:19.445 Cannot find device "nvmf_tgt_br" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.445 Cannot find device "nvmf_tgt_br2" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:19.445 Cannot find device "nvmf_init_br" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:19.445 Cannot find device "nvmf_init_br2" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:19.445 Cannot find device "nvmf_tgt_br" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:19.445 Cannot find device "nvmf_tgt_br2" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:19.445 Cannot find device "nvmf_br" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:19.445 Cannot find device "nvmf_init_if" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:19.445 Cannot find device "nvmf_init_if2" 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:19.445 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
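The nvmf_veth_init sequence above assembles the test network from veth pairs joined by a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace: 10.0.0.1 and 10.0.0.2 stay on the host (initiator) side, 10.0.0.3 and 10.0.0.4 live inside the namespace (target) side, and iptables rules admit port 4420 plus bridge-local forwarding. Condensed into a plain script using only commands that appear in the trace:

  # Namespace and veth pairs (host/namespace end <-> bridge end)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addresses: initiator side on the host, target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up and hang the bridge-side ends off nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br_if" up
      ip link set "$br_if" master nvmf_br
  done
  # Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT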
00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:19.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:19.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:19.705 00:23:19.705 --- 10.0.0.3 ping statistics --- 00:23:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.705 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:19.705 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:19.705 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:23:19.705 00:23:19.705 --- 10.0.0.4 ping statistics --- 00:23:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.705 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:19.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:19.705 00:23:19.705 --- 10.0.0.1 ping statistics --- 00:23:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.705 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:19.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:23:19.705 00:23:19.705 --- 10.0.0.2 ping statistics --- 00:23:19.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.705 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:19.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
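Connectivity is then verified in both directions before anything NVMe-related starts: the host pings the namespaced target addresses, the namespace pings back to the host addresses, and only after that is the kernel NVMe/TCP module loaded. The same check, condensed from the trace:

  ping -c 1 10.0.0.3                                  # host -> target interface
  ping -c 1 10.0.0.4                                  # host -> target interface 2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator interface
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2   # target namespace -> initiator interface 2
  modprobe nvme-tcp                                   # host-side NVMe/TCP support, loaded once the pings succeed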
00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=97191 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 97191 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97191 ']' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.705 10:36:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:19.705 [2024-12-10 10:36:54.902017] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:19.705 [2024-12-10 10:36:54.902078] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.965 [2024-12-10 10:36:55.035871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:19.965 [2024-12-10 10:36:55.066664] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.965 [2024-12-10 10:36:55.066728] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.965 [2024-12-10 10:36:55.066737] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.965 [2024-12-10 10:36:55.066744] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.965 [2024-12-10 10:36:55.066751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.965 [2024-12-10 10:36:55.066895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.965 [2024-12-10 10:36:55.066903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.965 [2024-12-10 10:36:55.093340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:19.965 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.965 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:19.965 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:19.965 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.965 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.223 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.223 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.223 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:20.223 [2024-12-10 10:36:55.404075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.223 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:20.790 Malloc0 00:23:20.790 10:36:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:21.049 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:21.308 [2024-12-10 10:36:56.500611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97234 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97234 /var/tmp/bdevperf.sock 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97234 ']' 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
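Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, the target side is configured entirely through rpc.py: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks as the backing namespace, subsystem nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.3:4420. A minimal replay of that sequence, using the same RPCs and flags shown in the trace (the default RPC socket path is assumed):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport (flags as traced)
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512 B blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set serial
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose the bdev as a namespace
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420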
00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.308 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 [2024-12-10 10:36:56.560275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:21.567 [2024-12-10 10:36:56.560371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97234 ] 00:23:21.567 [2024-12-10 10:36:56.693395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.567 [2024-12-10 10:36:56.726549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.567 [2024-12-10 10:36:56.754283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:21.567 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.567 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:21.567 10:36:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:21.826 10:36:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:22.085 NVMe0n1 00:23:22.086 10:36:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97243 00:23:22.086 10:36:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:22.086 10:36:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.344 Running I/O for 10 seconds... 
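On the initiator side the trace launches bdevperf on core 2, attaches the controller with a 5-second controller-loss timeout and a 2-second reconnect delay, then starts the workload with perform_tests. A condensed sketch of that sequence; all paths and parameters are copied from the trace, and the backgrounding is simplified relative to the real harness:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'

    # Start bdevperf on core mask 0x4, idle until told to run (-z), 128 queued 4 KiB verify I/Os for 10 s.
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

    # bdev_nvme options and attach parameters exactly as traced: -r -1, plus a
    # 5 s controller-loss timeout and a 2 s reconnect delay on the attach.
    $rpc bdev_nvme_set_options -r -1
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the run that produces the "Running I/O for 10 seconds..." line above.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &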
00:23:23.281 10:36:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:23.543 8213.00 IOPS, 32.08 MiB/s [2024-12-10T10:36:58.770Z] [2024-12-10 10:36:58.527008 - 10:36:58.529843] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated NOTICE pairs for every outstanding command on qid:1 (READs lba 71840-71960, WRITEs lba 71968-72848), each completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 after the listener was removed. 00:23:23.546 [2024-12-10 10:36:58.529852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4de0 is same with the state(6) to be set 00:23:23.546 [2024-12-10 10:36:58.529863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.546 [2024-12-10 10:36:58.529870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.546 [2024-12-10 10:36:58.529878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:23:23.546 [2024-12-10 10:36:58.529886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.546 [2024-12-10 10:36:58.529925] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7e4de0 was disconnected and freed. reset controller. 
00:23:23.546 [2024-12-10 10:36:58.530001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.546 [2024-12-10 10:36:58.530022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.546 [2024-12-10 10:36:58.530032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.546 [2024-12-10 10:36:58.530041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.546 [2024-12-10 10:36:58.530050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.546 [2024-12-10 10:36:58.530059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.546 [2024-12-10 10:36:58.530068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.546 [2024-12-10 10:36:58.530076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.546 [2024-12-10 10:36:58.530085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c4500 is same with the state(6) to be set 00:23:23.546 [2024-12-10 10:36:58.530285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:23.546 [2024-12-10 10:36:58.530306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c4500 (9): Bad file descriptor 00:23:23.546 [2024-12-10 10:36:58.530391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.546 [2024-12-10 10:36:58.530458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c4500 with addr=10.0.0.3, port=4420 00:23:23.546 [2024-12-10 10:36:58.530473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c4500 is same with the state(6) to be set 00:23:23.546 [2024-12-10 10:36:58.530492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c4500 (9): Bad file descriptor 00:23:23.546 [2024-12-10 10:36:58.530509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:23.546 [2024-12-10 10:36:58.530518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:23.546 [2024-12-10 10:36:58.530527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:23.546 [2024-12-10 10:36:58.530547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
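While the reconnect attempts below keep failing (connect() returns errno 111 because nothing is listening any more), the test checks that the controller and its bdev are still registered on the bdevperf side. The @57/@58 checks in the following lines use small helpers that wrap rpc.py and jq; a sketch reconstructed from the traced invocations, where the function wrappers are an assumption and the commands are verbatim:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'

    get_controller() {
        # Prints NVMe0 while the controller is still registered.
        $rpc bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Prints NVMe0n1 while the namespace bdev still exists.
        $rpc bdev_get_bdevs | jq -r '.[].name'
    }

    [[ $(get_controller) == NVMe0 ]]    # still attached while reconnecting
    [[ $(get_bdev) == NVMe0n1 ]]        # bdev still registered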
00:23:23.546 [2024-12-10 10:36:58.530558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:23.546 10:36:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:25.419 4490.00 IOPS, 17.54 MiB/s [2024-12-10T10:37:00.646Z] 2993.33 IOPS, 11.69 MiB/s [2024-12-10T10:37:00.646Z] [2024-12-10 10:37:00.544112] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.419 [2024-12-10 10:37:00.544190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c4500 with addr=10.0.0.3, port=4420 00:23:25.419 [2024-12-10 10:37:00.544211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c4500 is same with the state(6) to be set 00:23:25.419 [2024-12-10 10:37:00.544245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c4500 (9): Bad file descriptor 00:23:25.419 [2024-12-10 10:37:00.544271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:25.419 [2024-12-10 10:37:00.544286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:25.419 [2024-12-10 10:37:00.544303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:25.420 [2024-12-10 10:37:00.544337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.420 [2024-12-10 10:37:00.544355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:25.420 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:25.420 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.420 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:25.679 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:25.679 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:25.679 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:25.679 10:37:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:26.247 10:37:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:26.247 10:37:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:27.238 2245.00 IOPS, 8.77 MiB/s [2024-12-10T10:37:02.724Z] 1796.00 IOPS, 7.02 MiB/s [2024-12-10T10:37:02.724Z] [2024-12-10 10:37:02.544484] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.497 [2024-12-10 10:37:02.544564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c4500 with addr=10.0.0.3, port=4420 00:23:27.497 [2024-12-10 10:37:02.544580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c4500 is same with the state(6) to be set 00:23:27.497 [2024-12-10 10:37:02.544603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c4500 (9): Bad file descriptor 00:23:27.497 [2024-12-10 10:37:02.544620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in 
error state 00:23:27.497 [2024-12-10 10:37:02.544628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:27.497 [2024-12-10 10:37:02.544638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:27.497 [2024-12-10 10:37:02.544662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:27.497 [2024-12-10 10:37:02.544672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.369 1496.67 IOPS, 5.85 MiB/s [2024-12-10T10:37:04.596Z] 1282.86 IOPS, 5.01 MiB/s [2024-12-10T10:37:04.596Z] [2024-12-10 10:37:04.544694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:29.369 [2024-12-10 10:37:04.544749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.369 [2024-12-10 10:37:04.544775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:29.369 [2024-12-10 10:37:04.544784] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:29.369 [2024-12-10 10:37:04.544807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:30.564 1122.50 IOPS, 4.38 MiB/s 00:23:30.564 Latency(us) 00:23:30.564 [2024-12-10T10:37:05.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.564 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:30.564 Verification LBA range: start 0x0 length 0x4000 00:23:30.564 NVMe0n1 : 8.10 1107.97 4.33 15.79 0.00 113756.62 3351.27 7015926.69 00:23:30.564 [2024-12-10T10:37:05.791Z] =================================================================================================================== 00:23:30.564 [2024-12-10T10:37:05.791Z] Total : 1107.97 4.33 15.79 0.00 113756.62 3351.27 7015926.69 00:23:30.564 { 00:23:30.564 "results": [ 00:23:30.564 { 00:23:30.564 "job": "NVMe0n1", 00:23:30.564 "core_mask": "0x4", 00:23:30.564 "workload": "verify", 00:23:30.564 "status": "finished", 00:23:30.564 "verify_range": { 00:23:30.564 "start": 0, 00:23:30.564 "length": 16384 00:23:30.564 }, 00:23:30.564 "queue_depth": 128, 00:23:30.564 "io_size": 4096, 00:23:30.564 "runtime": 8.10492, 00:23:30.564 "iops": 1107.968986738919, 00:23:30.564 "mibps": 4.328003854448903, 00:23:30.564 "io_failed": 128, 00:23:30.564 "io_timeout": 0, 00:23:30.564 "avg_latency_us": 113756.6207529844, 00:23:30.564 "min_latency_us": 3351.2727272727275, 00:23:30.564 "max_latency_us": 7015926.69090909 00:23:30.564 } 00:23:30.564 ], 00:23:30.564 "core_count": 1 00:23:30.565 } 00:23:31.131 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:31.131 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.131 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:31.391 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:31.391 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:31.391 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:31.391 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- 
host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97243 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97234 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97234 ']' 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97234 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97234 00:23:31.650 killing process with pid 97234 00:23:31.650 Received shutdown signal, test time was about 9.325279 seconds 00:23:31.650 00:23:31.650 Latency(us) 00:23:31.650 [2024-12-10T10:37:06.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.650 [2024-12-10T10:37:06.877Z] =================================================================================================================== 00:23:31.650 [2024-12-10T10:37:06.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97234' 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97234 00:23:31.650 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97234 00:23:31.910 10:37:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:31.910 [2024-12-10 10:37:07.087421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:31.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97367 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97367 /var/tmp/bdevperf.sock 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97367 ']' 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.910 10:37:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:32.169 [2024-12-10 10:37:07.154700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:32.169 [2024-12-10 10:37:07.154790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97367 ] 00:23:32.169 [2024-12-10 10:37:07.282121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.169 [2024-12-10 10:37:07.315651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.169 [2024-12-10 10:37:07.344587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:33.106 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.106 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:33.106 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:33.365 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:33.624 NVMe0n1 00:23:33.624 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97392 00:23:33.624 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.624 10:37:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:33.883 Running I/O for 10 seconds... 
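For reference, the reconnect behaviour this run exercises is configured entirely by the two rpc.py calls echoed in the trace above (host/timeout.sh@78 and @79). A minimal sketch of the same sequence, assuming a target already listening on 10.0.0.3:4420 and a bdevperf instance serving /var/tmp/bdevperf.sock exactly as in this log:

  # same global bdev_nvme option the script sets first (host/timeout.sh@78)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # attach with the timeout knobs under test: up to 5 s of controller loss before giving up,
  # I/O failed fast after 2 s, one reconnect attempt per second
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1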
00:23:34.823 10:37:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:34.823 8085.00 IOPS, 31.58 MiB/s [2024-12-10T10:37:10.050Z] [2024-12-10 10:37:10.012676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.823 [2024-12-10 10:37:10.012726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74080 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.012981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.012990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:34.823 [2024-12-10 10:37:10.013160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.823 [2024-12-10 10:37:10.013329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.823 [2024-12-10 10:37:10.013338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.824 [2024-12-10 10:37:10.013984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.824 [2024-12-10 10:37:10.013996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74648 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 
[2024-12-10 10:37:10.014744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.825 [2024-12-10 10:37:10.014824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.825 [2024-12-10 10:37:10.014835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.014986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.826 [2024-12-10 10:37:10.015513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.826 [2024-12-10 10:37:10.015537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dffde0 is same with the state(6) to be set 00:23:34.826 [2024-12-10 10:37:10.015560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:34.826 [2024-12-10 10:37:10.015568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:34.826 [2024-12-10 10:37:10.015577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0 00:23:34.826 [2024-12-10 10:37:10.015587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015641] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dffde0 was disconnected and freed. reset controller. 
00:23:34.826 [2024-12-10 10:37:10.015734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.826 [2024-12-10 10:37:10.015751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.826 [2024-12-10 10:37:10.015772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.826 [2024-12-10 10:37:10.015792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.826 [2024-12-10 10:37:10.015812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.826 [2024-12-10 10:37:10.015822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:34.826 [2024-12-10 10:37:10.016044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.826 [2024-12-10 10:37:10.016074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:34.827 [2024-12-10 10:37:10.016167] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.827 [2024-12-10 10:37:10.016190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddf500 with addr=10.0.0.3, port=4420 00:23:34.827 [2024-12-10 10:37:10.016202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:34.827 [2024-12-10 10:37:10.016234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:34.827 [2024-12-10 10:37:10.016250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.827 [2024-12-10 10:37:10.016266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.827 [2024-12-10 10:37:10.016278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.827 [2024-12-10 10:37:10.016298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.827 [2024-12-10 10:37:10.016309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.827 10:37:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:36.022 4618.00 IOPS, 18.04 MiB/s [2024-12-10T10:37:11.249Z] [2024-12-10 10:37:11.016410] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.022 [2024-12-10 10:37:11.016500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddf500 with addr=10.0.0.3, port=4420 00:23:36.022 [2024-12-10 10:37:11.016518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:36.022 [2024-12-10 10:37:11.016540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:36.022 [2024-12-10 10:37:11.016558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.022 [2024-12-10 10:37:11.016567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.022 [2024-12-10 10:37:11.016593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.022 [2024-12-10 10:37:11.016616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.022 [2024-12-10 10:37:11.016627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.022 10:37:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:36.281 [2024-12-10 10:37:11.301593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:36.281 10:37:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97392 00:23:36.848 3078.67 IOPS, 12.03 MiB/s [2024-12-10T10:37:12.075Z] [2024-12-10 10:37:12.035412] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
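The connect() failures with errno = 111 above, followed by "Resetting controller successful", come from toggling the target's TCP listener while bdev_nvme keeps reconnecting. A sketch of that window using the same RPCs shown in the trace (issued against the target-side RPC socket, as the script does):

  # drop the listener: every reconnect attempt now fails with ECONNREFUSED (errno 111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1    # host/timeout.sh@90 waits here while the reset attempts keep failing
  # restore the listener: the next reconnect succeeds and the pending reset completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420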
00:23:38.721 2309.00 IOPS, 9.02 MiB/s [2024-12-10T10:37:14.885Z] 3699.00 IOPS, 14.45 MiB/s [2024-12-10T10:37:16.261Z] 4895.83 IOPS, 19.12 MiB/s [2024-12-10T10:37:17.198Z] 5769.00 IOPS, 22.54 MiB/s [2024-12-10T10:37:18.154Z] 6414.75 IOPS, 25.06 MiB/s [2024-12-10T10:37:19.091Z] 6925.11 IOPS, 27.05 MiB/s [2024-12-10T10:37:19.091Z] 7337.00 IOPS, 28.66 MiB/s 00:23:43.864 Latency(us) 00:23:43.864 [2024-12-10T10:37:19.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.864 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.864 Verification LBA range: start 0x0 length 0x4000 00:23:43.864 NVMe0n1 : 10.01 7341.41 28.68 0.00 0.00 17401.71 1258.59 3019898.88 00:23:43.864 [2024-12-10T10:37:19.091Z] =================================================================================================================== 00:23:43.864 [2024-12-10T10:37:19.091Z] Total : 7341.41 28.68 0.00 0.00 17401.71 1258.59 3019898.88 00:23:43.864 { 00:23:43.864 "results": [ 00:23:43.864 { 00:23:43.864 "job": "NVMe0n1", 00:23:43.864 "core_mask": "0x4", 00:23:43.864 "workload": "verify", 00:23:43.864 "status": "finished", 00:23:43.864 "verify_range": { 00:23:43.864 "start": 0, 00:23:43.864 "length": 16384 00:23:43.864 }, 00:23:43.864 "queue_depth": 128, 00:23:43.864 "io_size": 4096, 00:23:43.864 "runtime": 10.006525, 00:23:43.864 "iops": 7341.4097301510765, 00:23:43.864 "mibps": 28.677381758402642, 00:23:43.864 "io_failed": 0, 00:23:43.864 "io_timeout": 0, 00:23:43.864 "avg_latency_us": 17401.714950710448, 00:23:43.864 "min_latency_us": 1258.5890909090908, 00:23:43.864 "max_latency_us": 3019898.88 00:23:43.864 } 00:23:43.864 ], 00:23:43.864 "core_count": 1 00:23:43.864 } 00:23:43.864 10:37:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97491 00:23:43.864 10:37:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:23:43.864 10:37:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.864 Running I/O for 10 seconds... 
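The bdevperf result block printed above is plain JSON, so the summary numbers can be pulled back out with the same jq style used elsewhere in this test. A small sketch, assuming the perform_tests output has been captured to a file (results.json is an illustrative name, not one the script creates):

  # print IOPS, MiB/s and failed-I/O count from the first (only) job in the results array
  jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed"' results.json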
00:23:44.801 10:37:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:45.063 7972.00 IOPS, 31.14 MiB/s [2024-12-10T10:37:20.290Z] [2024-12-10 10:37:20.157798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x763a90 is same with the state(6) to be set
[... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state error for tqpair=0x763a90 repeats verbatim here, with only the timestamp advancing from 10:37:20.157846 through 10:37:20.158664 ...]
00:23:45.064 [2024-12-10 10:37:20.158687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x763a90 is same with the state(6) to be set 00:23:45.064 [2024-12-10 10:37:20.158695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x763a90 is same with the state(6) to be set 00:23:45.064 [2024-12-10 10:37:20.158748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.064 [2024-12-10 10:37:20.158777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.064 [2024-12-10 10:37:20.158813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.064 [2024-12-10 10:37:20.158823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.064 [2024-12-10 10:37:20.158834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.064 [2024-12-10 10:37:20.158844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.064 [2024-12-10 10:37:20.158854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.064 [2024-12-10 10:37:20.158871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.064 [2024-12-10 10:37:20.158882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.064 [2024-12-10 10:37:20.158891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.064 [2024-12-10 10:37:20.158901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.064 [2024-12-10 10:37:20.158910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.158921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.158929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.158940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.158949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.158960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.158979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.158987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.158998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159180] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.065 [2024-12-10 10:37:20.159744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.065 [2024-12-10 10:37:20.159755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 
10:37:20.159824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.159964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.159996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.066 [2024-12-10 10:37:20.160585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.066 [2024-12-10 10:37:20.160594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:45.067 [2024-12-10 10:37:20.160680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.160982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.160992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.161001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.161020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.161039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.161059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.067 [2024-12-10 10:37:20.161079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.067 [2024-12-10 10:37:20.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.067 [2024-12-10 10:37:20.161295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.068 [2024-12-10 10:37:20.161314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.068 [2024-12-10 10:37:20.161333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.068 [2024-12-10 10:37:20.161352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.068 [2024-12-10 10:37:20.161372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.068 [2024-12-10 10:37:20.161391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01c20 is same with the state(6) to be set 00:23:45.068 [2024-12-10 10:37:20.161412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.068 [2024-12-10 10:37:20.161419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.068 [2024-12-10 10:37:20.161453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72864 len:8 PRP1 0x0 PRP2 0x0 00:23:45.068 [2024-12-10 10:37:20.161463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.068 [2024-12-10 10:37:20.161504] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e01c20 was disconnected and freed. reset controller. 
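The dump above is the drain of qpair 0x1e01c20: every READ/WRITE still queued on it is completed with ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme schedules a controller reset. A hedged, purely illustrative way to watch that reset/reconnect cycle from a second shell (not part of host/timeout.sh; assumes the bdev_nvme_get_controllers RPC and the bdevperf socket path used elsewhere in this log):

  # Hypothetical observation loop, not in the test itself: poll the controller state
  # through the bdevperf RPC socket while the reset and reconnect attempts run.
  while sleep 1; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers -n NVMe0
  done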
00:23:45.068 [2024-12-10 10:37:20.161763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.068 [2024-12-10 10:37:20.161846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:45.068 [2024-12-10 10:37:20.161997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.068 [2024-12-10 10:37:20.162019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddf500 with addr=10.0.0.3, port=4420 00:23:45.068 [2024-12-10 10:37:20.162030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:45.068 [2024-12-10 10:37:20.162047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:45.068 [2024-12-10 10:37:20.162062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.068 [2024-12-10 10:37:20.162072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.068 [2024-12-10 10:37:20.162082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.068 [2024-12-10 10:37:20.162101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.068 [2024-12-10 10:37:20.162111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.068 10:37:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:46.004 4498.00 IOPS, 17.57 MiB/s [2024-12-10T10:37:21.231Z] [2024-12-10 10:37:21.162205] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.004 [2024-12-10 10:37:21.162264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddf500 with addr=10.0.0.3, port=4420 00:23:46.004 [2024-12-10 10:37:21.162279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:46.004 [2024-12-10 10:37:21.162299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:46.005 [2024-12-10 10:37:21.162316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.005 [2024-12-10 10:37:21.162330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.005 [2024-12-10 10:37:21.162340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.005 [2024-12-10 10:37:21.162361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
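Each reconnect attempt above fails with errno 111 (connection refused): the target is not listening on 10.0.0.3:4420 at this point, so bdev_nvme keeps retrying on its reconnect delay while the script waits at the sleep traced at host/timeout.sh@101. The listener is restored at @102 in the trace that follows; a minimal sketch of that sequence, using only the commands and arguments shown in this log:

  # host/timeout.sh@101-@102 as traced in this log: let a few reconnect attempts fail,
  # then re-add the TCP listener so the pending controller reset can finally succeed.
  sleep 3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420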
00:23:46.005 [2024-12-10 10:37:21.162372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.940 2998.67 IOPS, 11.71 MiB/s [2024-12-10T10:37:22.167Z] [2024-12-10 10:37:22.162490] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.941 [2024-12-10 10:37:22.162547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddf500 with addr=10.0.0.3, port=4420 00:23:46.941 [2024-12-10 10:37:22.162561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:46.941 [2024-12-10 10:37:22.162582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:46.941 [2024-12-10 10:37:22.162614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.941 [2024-12-10 10:37:22.162623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.941 [2024-12-10 10:37:22.162633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.941 [2024-12-10 10:37:22.162655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.941 [2024-12-10 10:37:22.162666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.136 2249.00 IOPS, 8.79 MiB/s [2024-12-10T10:37:23.363Z] [2024-12-10 10:37:23.165730] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.136 [2024-12-10 10:37:23.165787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddf500 with addr=10.0.0.3, port=4420 00:23:48.136 [2024-12-10 10:37:23.165802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddf500 is same with the state(6) to be set 00:23:48.136 [2024-12-10 10:37:23.166019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf500 (9): Bad file descriptor 00:23:48.136 [2024-12-10 10:37:23.166232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.136 [2024-12-10 10:37:23.166244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.136 [2024-12-10 10:37:23.166253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.136 [2024-12-10 10:37:23.169934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.136 [2024-12-10 10:37:23.170124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.136 10:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:48.395 [2024-12-10 10:37:23.465315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:48.395 10:37:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97491 00:23:49.221 1799.20 IOPS, 7.03 MiB/s [2024-12-10T10:37:24.448Z] [2024-12-10 10:37:24.207227] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
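Once the listener is back, the next reconnect attempt succeeds ("Resetting controller successful"), bdevperf ramps throughput back up, and the run ends with the summary and JSON below, after which the script tears the process down with killprocess. A minimal sketch of that teardown, reconstructed only from the xtrace lines that follow; the real helper in common/autotest_common.sh also handles processes started under sudo, which this sketch deliberately skips:

  # Sketch of the killprocess helper as it appears in the trace below; the argument
  # check, liveness probe and process-name lookup mirror the traced commands.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                          # autotest_common.sh@950
      kill -0 "$pid" || return 1                         # @954: still running?
      local process_name
      if [ "$(uname)" = Linux ]; then                    # @955
          process_name=$(ps --no-headers -o comm= "$pid")   # @956
      fi
      if [ "$process_name" = sudo ]; then                # @960: the real helper resolves
          return 1                                       # the sudo child here; omitted
      fi
      echo "killing process with pid $pid"               # @968
      kill "$pid"                                        # @969
      wait "$pid"                                        # @974
  }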
00:23:51.130 3021.83 IOPS, 11.80 MiB/s
[2024-12-10T10:37:27.305Z] 4135.29 IOPS, 16.15 MiB/s
[2024-12-10T10:37:28.241Z] 4965.38 IOPS, 19.40 MiB/s
[2024-12-10T10:37:29.178Z] 5623.44 IOPS, 21.97 MiB/s
[2024-12-10T10:37:29.178Z] 6149.10 IOPS, 24.02 MiB/s
00:23:53.951 Latency(us)
00:23:53.951 [2024-12-10T10:37:29.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:53.951 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:53.951 Verification LBA range: start 0x0 length 0x4000
00:23:53.951 NVMe0n1 : 10.01 6156.46 24.05 4208.53 0.00 12321.30 528.76 3019898.88
00:23:53.951 [2024-12-10T10:37:29.178Z] ===================================================================================================================
00:23:53.951 [2024-12-10T10:37:29.178Z] Total : 6156.46 24.05 4208.53 0.00 12321.30 0.00 3019898.88
00:23:53.951 {
00:23:53.951   "results": [
00:23:53.951     {
00:23:53.951       "job": "NVMe0n1",
00:23:53.951       "core_mask": "0x4",
00:23:53.951       "workload": "verify",
00:23:53.951       "status": "finished",
00:23:53.951       "verify_range": {
00:23:53.951         "start": 0,
00:23:53.951         "length": 16384
00:23:53.951       },
00:23:53.951       "queue_depth": 128,
00:23:53.951       "io_size": 4096,
00:23:53.951       "runtime": 10.007538,
00:23:53.951       "iops": 6156.459261009051,
00:23:53.951       "mibps": 24.048668988316606,
00:23:53.951       "io_failed": 42117,
00:23:53.951       "io_timeout": 0,
00:23:53.951       "avg_latency_us": 12321.301628595067,
00:23:53.951       "min_latency_us": 528.7563636363636,
00:23:53.951       "max_latency_us": 3019898.88
00:23:53.951     }
00:23:53.951   ],
00:23:53.951   "core_count": 1
00:23:53.951 }
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97367
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97367 ']'
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97367
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97367
killing process with pid 97367
Received shutdown signal, test time was about 10.000000 seconds
00:23:53.951
00:23:53.951 Latency(us)
[2024-12-10T10:37:29.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-10T10:37:29.178Z] ===================================================================================================================
[2024-12-10T10:37:29.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97367'
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97367
10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97367
00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t
10 -f 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97611 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97611 /var/tmp/bdevperf.sock 00:23:54.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97611 ']' 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.210 10:37:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:54.210 [2024-12-10 10:37:29.297137] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:54.210 [2024-12-10 10:37:29.297462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97611 ] 00:23:54.210 [2024-12-10 10:37:29.435287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.468 [2024-12-10 10:37:29.469051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.468 [2024-12-10 10:37:29.498124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:55.033 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.033 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:55.033 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97627 00:23:55.033 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:55.034 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97611 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:55.601 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:55.860 NVMe0n1 00:23:55.860 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97663 00:23:55.860 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.860 10:37:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:55.860 Running I/O for 10 seconds... 
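The trace above launches the next timeout scenario: bdevperf is started in wait-for-RPC mode on core mask 0x4, bdev_nvme options are set, the nvmf_timeout.bt bpftrace probe is attached to the new process, and the controller is attached with a 5-second ctrlr-loss timeout and 2-second reconnect delay before the 10-second random-read run begins. The same steps gathered into one sketch; every binary, socket and flag is copied from the trace, while the backgrounding and pid capture are inferred from the pid assignments shown:

  # host/timeout.sh@109-@125 as traced above, consolidated for readability.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!          # 97611 in this run; the script then waits for the RPC socket

  # bdev_nvme options exactly as passed in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_options -r -1 -e 9

  # attach the bpftrace probe used to verify the timeout behaviour
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &
  dtrace_pid=$!

  # attach the TCP controller with ctrlr-loss-timeout 5 s and reconnect-delay 2 s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the timed I/O and give it a second to ramp up
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!
  sleep 1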
00:23:56.797 10:37:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:57.060 17272.00 IOPS, 67.47 MiB/s [2024-12-10T10:37:32.287Z] [2024-12-10 10:37:32.077191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.077438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.077594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.077694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.077801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.077961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 10:37:32.078750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.060 [2024-12-10 
10:37:32.078758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set
[... the identical nvmf_tcp_qpair_set_recv_state error for tqpair=0x761dd0 is logged several dozen more times between 10:37:32.078766 and 10:37:32.079480; the duplicate entries are omitted here ...]
00:23:57.061 [2024-12-10 10:37:32.079488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.061 [2024-12-10 10:37:32.079645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.062 [2024-12-10 10:37:32.079653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761dd0 is same with the state(6) to be set 00:23:57.062 [2024-12-10 10:37:32.079721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.079985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.079996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19408 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.062 [2024-12-10 10:37:32.080452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.062 [2024-12-10 10:37:32.080542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.062 [2024-12-10 10:37:32.080551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.063 [2024-12-10 10:37:32.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.063 [2024-12-10 10:37:32.080571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.063 [2024-12-10 10:37:32.080582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.063 [2024-12-10 10:37:32.080591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.063 [2024-12-10 10:37:32.080601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.063 [2024-12-10 10:37:32.080611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.063 [2024-12-10 10:37:32.080621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.063 [2024-12-10 10:37:32.080630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.063 [2024-12-10 10:37:32.080641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.063 [2024-12-10 10:37:32.080650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.063
[... identical READ sqid:1 / ABORTED - SQ DELETION (00/08) qid:1 notice pairs repeat here for every remaining queued command, cid:45 through cid:124, with only the cid and lba values changing; the repeated entries are omitted ...]
00:23:57.065 [2024-12-10 10:37:32.082299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49072 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.065 [2024-12-10 10:37:32.082308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.065 [2024-12-10 10:37:32.082319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.065 [2024-12-10 10:37:32.082328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.065 [2024-12-10 10:37:32.082339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.065 [2024-12-10 10:37:32.082348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.065 [2024-12-10 10:37:32.082361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.065 [2024-12-10 10:37:32.082370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.065 [2024-12-10 10:37:32.082380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcdb80 is same with the state(6) to be set 00:23:57.065 [2024-12-10 10:37:32.082392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.065 [2024-12-10 10:37:32.082408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.065 [2024-12-10 10:37:32.082417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127296 len:8 PRP1 0x0 PRP2 0x0 00:23:57.065 [2024-12-10 10:37:32.082426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.065 [2024-12-10 10:37:32.082468] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bcdb80 was disconnected and freed. reset controller. 00:23:57.065 [2024-12-10 10:37:32.082738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.065 [2024-12-10 10:37:32.082836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bad500 (9): Bad file descriptor 00:23:57.065 [2024-12-10 10:37:32.082956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.065 [2024-12-10 10:37:32.082977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bad500 with addr=10.0.0.3, port=4420 00:23:57.065 [2024-12-10 10:37:32.082988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad500 is same with the state(6) to be set 00:23:57.065 [2024-12-10 10:37:32.083005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bad500 (9): Bad file descriptor 00:23:57.065 [2024-12-10 10:37:32.083020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.065 [2024-12-10 10:37:32.083030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.065 [2024-12-10 10:37:32.083040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
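Note: the sequence above is the bdev_nvme reconnect path at work. The test has taken the target listener at 10.0.0.3:4420 away, so every reconnect attempt fails with errno 111 (ECONNREFUSED), the controller is marked failed, and the reset is retried after the configured reconnect delay. A minimal sketch of attaching a controller with explicit reconnect knobs is below; the RPC socket path, bdev name and timeout values are illustrative assumptions, not necessarily the exact invocation this run used.

    # Hypothetical sketch: attach an NVMe-oF TCP controller with explicit
    # reconnect behaviour so that a dropped target produces a reset/reconnect
    # cycle like the one logged above. Socket path and timeout values are assumed.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 10 \
        --reconnect-delay-sec 2 \
        --fast-io-fail-timeout-sec 5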
00:23:57.065 [2024-12-10 10:37:32.083059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.065 [2024-12-10 10:37:32.083069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.065 10:37:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97663 00:23:58.938 9653.00 IOPS, 37.71 MiB/s [2024-12-10T10:37:34.165Z] 6435.33 IOPS, 25.14 MiB/s [2024-12-10T10:37:34.166Z] [2024-12-10 10:37:34.083198] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.939 [2024-12-10 10:37:34.083262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bad500 with addr=10.0.0.3, port=4420 00:23:58.939 [2024-12-10 10:37:34.083277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad500 is same with the state(6) to be set 00:23:58.939 [2024-12-10 10:37:34.083300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bad500 (9): Bad file descriptor 00:23:58.939 [2024-12-10 10:37:34.083328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.939 [2024-12-10 10:37:34.083339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.939 [2024-12-10 10:37:34.083349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.939 [2024-12-10 10:37:34.083372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.939 [2024-12-10 10:37:34.083382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.810 4826.50 IOPS, 18.85 MiB/s [2024-12-10T10:37:36.296Z] 3861.20 IOPS, 15.08 MiB/s [2024-12-10T10:37:36.296Z] [2024-12-10 10:37:36.083526] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.069 [2024-12-10 10:37:36.083589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bad500 with addr=10.0.0.3, port=4420 00:24:01.069 [2024-12-10 10:37:36.083629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad500 is same with the state(6) to be set 00:24:01.069 [2024-12-10 10:37:36.083653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bad500 (9): Bad file descriptor 00:24:01.069 [2024-12-10 10:37:36.083671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.069 [2024-12-10 10:37:36.083681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.069 [2024-12-10 10:37:36.083692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.069 [2024-12-10 10:37:36.083715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.069 [2024-12-10 10:37:36.083725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.943 3217.67 IOPS, 12.57 MiB/s [2024-12-10T10:37:38.170Z] 2758.00 IOPS, 10.77 MiB/s [2024-12-10T10:37:38.170Z] [2024-12-10 10:37:38.083807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:02.943 [2024-12-10 10:37:38.084029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.943 [2024-12-10 10:37:38.084066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.943 [2024-12-10 10:37:38.084077] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:02.943 [2024-12-10 10:37:38.084105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.880 2413.25 IOPS, 9.43 MiB/s 00:24:03.880 Latency(us) 00:24:03.880 [2024-12-10T10:37:39.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.880 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:03.880 NVMe0n1 : 8.13 2373.85 9.27 15.74 0.00 53475.89 6881.28 7015926.69 00:24:03.880 [2024-12-10T10:37:39.107Z] =================================================================================================================== 00:24:03.880 [2024-12-10T10:37:39.107Z] Total : 2373.85 9.27 15.74 0.00 53475.89 6881.28 7015926.69 00:24:03.880 { 00:24:03.880 "results": [ 00:24:03.880 { 00:24:03.880 "job": "NVMe0n1", 00:24:03.880 "core_mask": "0x4", 00:24:03.880 "workload": "randread", 00:24:03.880 "status": "finished", 00:24:03.880 "queue_depth": 128, 00:24:03.880 "io_size": 4096, 00:24:03.880 "runtime": 8.132768, 00:24:03.880 "iops": 2373.853526868097, 00:24:03.880 "mibps": 9.272865339328504, 00:24:03.880 "io_failed": 128, 00:24:03.880 "io_timeout": 0, 00:24:03.880 "avg_latency_us": 53475.89252350614, 00:24:03.880 "min_latency_us": 6881.28, 00:24:03.880 "max_latency_us": 7015926.69090909 00:24:03.880 } 00:24:03.880 ], 00:24:03.880 "core_count": 1 00:24:03.880 } 00:24:03.880 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.140 Attaching 5 probes... 
00:24:04.140 1307.370747: reset bdev controller NVMe0 00:24:04.140 1307.520931: reconnect bdev controller NVMe0 00:24:04.140 3307.731801: reconnect delay bdev controller NVMe0 00:24:04.140 3307.747745: reconnect bdev controller NVMe0 00:24:04.140 5308.048821: reconnect delay bdev controller NVMe0 00:24:04.140 5308.079387: reconnect bdev controller NVMe0 00:24:04.140 7308.429989: reconnect delay bdev controller NVMe0 00:24:04.140 7308.446279: reconnect bdev controller NVMe0 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97627 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97611 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97611 ']' 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97611 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97611 00:24:04.140 killing process with pid 97611 00:24:04.140 Received shutdown signal, test time was about 8.201977 seconds 00:24:04.140 00:24:04.140 Latency(us) 00:24:04.140 [2024-12-10T10:37:39.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.140 [2024-12-10T10:37:39.367Z] =================================================================================================================== 00:24:04.140 [2024-12-10T10:37:39.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97611' 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97611 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97611 00:24:04.140 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:24:04.399 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.399 10:37:39 
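Note: two sanity checks fall out of the numbers above. The trace.txt listing shows "reconnect delay" events at roughly 1307, 3307, 5308 and 7308 ms, i.e. about 2000 ms apart, which is what the grep -c check is counting. The periodic IOPS readings are consistent with a cumulative average over the whole run: about 2 s x 9653 ≈ 19306 I/Os completed before the qpair was torn down (and io_failed = 128 matches a queue depth's worth of commands still outstanding), nothing completes afterwards, so the printed rate decays as total/elapsed and ends near 19306 / 8.13 s ≈ 2374 IOPS, in line with the final table. The 19306 figure is inferred from the first reading, not taken from the log; a quick arithmetic check:

    # Reproduce the decaying cumulative-average IOPS readings (19306 is an inferred total).
    for t in 2 3 4 5 6 7 8; do
        printf 't=%ss -> %s IOPS\n' "$t" "$(echo "scale=2; 19306 / $t" | bc)"
    done
    # t=2s -> 9653.00, t=3s -> 6435.33, ..., t=8s -> 2413.25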
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.399 rmmod nvme_tcp 00:24:04.399 rmmod nvme_fabrics 00:24:04.659 rmmod nvme_keyring 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 97191 ']' 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 97191 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97191 ']' 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97191 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97191 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.659 killing process with pid 97191 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97191' 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97191 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97191 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.659 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.918 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.918 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.918 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.918 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.918 10:37:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.918 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.918 10:37:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:04.918 ************************************ 00:24:04.918 END TEST nvmf_timeout 00:24:04.918 ************************************ 00:24:04.918 00:24:04.918 real 0m45.869s 00:24:04.918 user 2m15.089s 00:24:04.918 sys 0m5.144s 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:04.918 ************************************ 00:24:04.918 END TEST nvmf_host 00:24:04.918 ************************************ 00:24:04.918 00:24:04.918 real 5m41.598s 00:24:04.918 user 16m2.849s 00:24:04.918 sys 1m15.897s 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.918 10:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.177 10:37:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:05.177 10:37:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:05.177 ************************************ 00:24:05.177 END TEST nvmf_tcp 00:24:05.177 ************************************ 00:24:05.177 00:24:05.177 real 15m1.971s 00:24:05.177 user 39m34.121s 00:24:05.177 sys 4m2.614s 00:24:05.177 10:37:40 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:05.177 10:37:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.177 10:37:40 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:24:05.177 10:37:40 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:05.177 10:37:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:05.177 10:37:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:05.177 10:37:40 -- common/autotest_common.sh@10 -- # set +x 00:24:05.177 ************************************ 00:24:05.177 START TEST nvmf_dif 00:24:05.177 ************************************ 00:24:05.177 10:37:40 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:05.177 * Looking for test storage... 
00:24:05.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:05.177 10:37:40 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:05.177 10:37:40 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:24:05.177 10:37:40 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:05.437 10:37:40 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.437 10:37:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:05.437 10:37:40 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.437 10:37:40 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:05.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.437 --rc genhtml_branch_coverage=1 00:24:05.437 --rc genhtml_function_coverage=1 00:24:05.437 --rc genhtml_legend=1 00:24:05.437 --rc geninfo_all_blocks=1 00:24:05.437 --rc geninfo_unexecuted_blocks=1 00:24:05.437 00:24:05.437 ' 00:24:05.437 10:37:40 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:05.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.437 --rc genhtml_branch_coverage=1 00:24:05.437 --rc genhtml_function_coverage=1 00:24:05.437 --rc genhtml_legend=1 00:24:05.437 --rc geninfo_all_blocks=1 00:24:05.437 --rc geninfo_unexecuted_blocks=1 00:24:05.437 00:24:05.437 ' 00:24:05.437 10:37:40 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:24:05.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.437 --rc genhtml_branch_coverage=1 00:24:05.437 --rc genhtml_function_coverage=1 00:24:05.437 --rc genhtml_legend=1 00:24:05.437 --rc geninfo_all_blocks=1 00:24:05.437 --rc geninfo_unexecuted_blocks=1 00:24:05.437 00:24:05.437 ' 00:24:05.437 10:37:40 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:05.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.437 --rc genhtml_branch_coverage=1 00:24:05.437 --rc genhtml_function_coverage=1 00:24:05.437 --rc genhtml_legend=1 00:24:05.437 --rc geninfo_all_blocks=1 00:24:05.437 --rc geninfo_unexecuted_blocks=1 00:24:05.437 00:24:05.437 ' 00:24:05.437 10:37:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.437 10:37:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.438 10:37:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.438 10:37:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.438 10:37:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.438 10:37:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.438 10:37:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.438 10:37:40 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.438 10:37:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.438 10:37:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:05.438 10:37:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.438 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.438 10:37:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:05.438 10:37:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:05.438 10:37:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:05.438 10:37:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:05.438 10:37:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.438 10:37:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:05.438 10:37:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:05.438 10:37:40 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.438 Cannot find device "nvmf_init_br" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.438 Cannot find device "nvmf_init_br2" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.438 Cannot find device "nvmf_tgt_br" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.438 Cannot find device "nvmf_tgt_br2" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.438 Cannot find device "nvmf_init_br" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.438 Cannot find device "nvmf_init_br2" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.438 Cannot find device "nvmf_tgt_br" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.438 Cannot find device "nvmf_tgt_br2" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.438 Cannot find device "nvmf_br" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:24:05.438 Cannot find device "nvmf_init_if" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:05.438 Cannot find device "nvmf_init_if2" 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.438 10:37:40 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.698 10:37:40 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:24:05.698 00:24:05.698 --- 10.0.0.3 ping statistics --- 00:24:05.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.698 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.698 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.698 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:24:05.698 00:24:05.698 --- 10.0.0.4 ping statistics --- 00:24:05.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.698 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:05.698 00:24:05.698 --- 10.0.0.1 ping statistics --- 00:24:05.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.698 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:24:05.698 00:24:05.698 --- 10.0.0.2 ping statistics --- 00:24:05.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.698 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:05.698 10:37:40 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:05.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:06.216 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:06.216 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:06.216 10:37:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:06.216 10:37:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=98155 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 98155 00:24:06.216 10:37:41 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 98155 ']' 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.216 10:37:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.216 [2024-12-10 10:37:41.342443] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:06.216 [2024-12-10 10:37:41.342539] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.475 [2024-12-10 10:37:41.482867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.475 [2024-12-10 10:37:41.525192] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
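
The "Cannot find device" / "Cannot open network namespace" lines above are the cleanup pass failing harmlessly (each of those commands is followed by a `true`); nothing from a previous run was left to tear down. The nvmf_veth_init sequence above then builds the test topology: the initiator-side veth ends stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target-side ends move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, all four peer ends are enslaved to the nvmf_br bridge, TCP port 4420 is opened with iptables, and one ping in each direction verifies connectivity. Condensed to a single veth pair per side (the second pair and the link-up commands are analogous and omitted here), the same setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator side, stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # the ipts wrapper also tags the rule with an SPDK_NVMF comment
ping -c 1 10.0.0.3                                                 # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                  # target namespace -> initiator

The EAL/startup notices around this point come from the target application just launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF).
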
00:24:06.475 [2024-12-10 10:37:41.525253] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.475 [2024-12-10 10:37:41.525268] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.475 [2024-12-10 10:37:41.525278] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.475 [2024-12-10 10:37:41.525286] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.475 [2024-12-10 10:37:41.525318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.475 [2024-12-10 10:37:41.560320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:24:07.413 10:37:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 10:37:42 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.413 10:37:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:07.413 10:37:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 [2024-12-10 10:37:42.336454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.413 10:37:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 ************************************ 00:24:07.413 START TEST fio_dif_1_default 00:24:07.413 ************************************ 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 bdev_null0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:07.413 
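
From here the target is configured over its JSON-RPC socket (rpc_cmd is the test harness's wrapper around SPDK's RPC interface, pointed at the nvmf_tgt running in the namespace). The transport is created with --dif-insert-or-strip, so the target inserts and strips the protection information itself rather than requiring the host to carry it, and the null bdev is created with 512-byte blocks plus 16 bytes of metadata holding DIF type 1. Stripped of the xtrace noise, the first subsystem amounts to:

rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 512B data + 16B metadata per block, PI type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The add_ns and add_listener calls are the next two RPCs in the trace that follows.
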
10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:07.413 [2024-12-10 10:37:42.380567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:07.413 { 00:24:07.413 "params": { 00:24:07.413 "name": "Nvme$subsystem", 00:24:07.413 "trtype": "$TEST_TRANSPORT", 00:24:07.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.413 "adrfam": "ipv4", 00:24:07.413 "trsvcid": "$NVMF_PORT", 00:24:07.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.413 "hdgst": ${hdgst:-false}, 00:24:07.413 "ddgst": ${ddgst:-false} 00:24:07.413 }, 00:24:07.413 "method": "bdev_nvme_attach_controller" 00:24:07.413 } 00:24:07.413 EOF 00:24:07.413 )") 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:07.413 "params": { 00:24:07.413 "name": "Nvme0", 00:24:07.413 "trtype": "tcp", 00:24:07.413 "traddr": "10.0.0.3", 00:24:07.413 "adrfam": "ipv4", 00:24:07.413 "trsvcid": "4420", 00:24:07.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:07.413 "hdgst": false, 00:24:07.413 "ddgst": false 00:24:07.413 }, 00:24:07.413 "method": "bdev_nvme_attach_controller" 00:24:07.413 }' 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:07.413 10:37:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:07.413 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:07.413 fio-3.35 00:24:07.413 Starting 1 thread 00:24:19.620 00:24:19.620 filename0: (groupid=0, jobs=1): err= 0: pid=98222: Tue Dec 10 10:37:53 2024 00:24:19.620 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(394MiB/10001msec) 00:24:19.620 slat (nsec): min=5851, max=77713, avg=7535.02, stdev=3213.62 00:24:19.620 clat (usec): min=313, max=3777, avg=374.10, stdev=43.50 00:24:19.620 lat (usec): min=319, max=3805, avg=381.64, stdev=44.21 00:24:19.620 clat percentiles (usec): 
00:24:19.620 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:24:19.620 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:24:19.620 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 445], 00:24:19.620 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 562], 99.95th=[ 586], 00:24:19.620 | 99.99th=[ 766] 00:24:19.620 bw ( KiB/s): min=39057, max=41664, per=100.00%, avg=40354.58, stdev=743.75, samples=19 00:24:19.620 iops : min= 9764, max=10416, avg=10088.63, stdev=185.96, samples=19 00:24:19.620 lat (usec) : 500=99.06%, 750=0.93%, 1000=0.01% 00:24:19.620 lat (msec) : 2=0.01%, 4=0.01% 00:24:19.620 cpu : usr=84.62%, sys=13.45%, ctx=33, majf=0, minf=0 00:24:19.620 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:19.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.620 issued rwts: total=100844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.620 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:19.620 00:24:19.620 Run status group 0 (all jobs): 00:24:19.620 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=394MiB (413MB), run=10001-10001msec 00:24:19.620 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:19.620 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:19.620 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 00:24:19.621 real 0m10.866s 00:24:19.621 user 0m9.009s 00:24:19.621 sys 0m1.571s 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 ************************************ 00:24:19.621 END TEST fio_dif_1_default 00:24:19.621 ************************************ 00:24:19.621 10:37:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:19.621 10:37:53 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:19.621 10:37:53 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 ************************************ 00:24:19.621 START TEST fio_dif_1_multi_subsystems 00:24:19.621 ************************************ 00:24:19.621 10:37:53 
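
Before the multi-subsystem variant, it is worth unpacking how these fio runs are driven: fio_bdev LD_PRELOADs SPDK's bdev ioengine into the stock /usr/src/fio/fio binary, hands it the attach-controller JSON on /dev/fd/62 (--spdk_json_conf) and the generated job file on /dev/fd/61. The single-file job that just completed sustained about 10.1k IOPS (39.4 MiB/s) from one thread. The job file itself is not echoed in the log; a hypothetical reconstruction consistent with the header line (randread, 4 KiB blocks, iodepth 4, 10-second run; the Nvme0n1 filename assumes SPDK's usual "<controller>n<nsid>" bdev naming for the controller attached as Nvme0) would look like:

cat > /tmp/dif_1_default.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=10

[filename0]
rw=randread
bs=4k
iodepth=4
filename=Nvme0n1
EOF

# in the harness both the JSON and the job file arrive on inherited file descriptors
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /tmp/dif_1_default.fio
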
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 bdev_null0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 [2024-12-10 10:37:53.305517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 bdev_null1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:19.621 { 00:24:19.621 "params": { 00:24:19.621 "name": "Nvme$subsystem", 00:24:19.621 "trtype": "$TEST_TRANSPORT", 00:24:19.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:19.621 "adrfam": "ipv4", 00:24:19.621 "trsvcid": "$NVMF_PORT", 00:24:19.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:19.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:19.621 "hdgst": ${hdgst:-false}, 00:24:19.621 "ddgst": ${ddgst:-false} 00:24:19.621 }, 00:24:19.621 "method": "bdev_nvme_attach_controller" 00:24:19.621 } 00:24:19.621 EOF 00:24:19.621 )") 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:19.621 10:37:53 
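
fio_dif_1_multi_subsystems repeats the same recipe once per index: each index gets its own null bdev and its own NQN, and both subsystems share the single 10.0.0.3:4420 TCP portal, so the host tells them apart purely by subsystem NQN. The create_subsystems loop traced above boils down to:

for sub in 0 1; do
    rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" --serial-number "53313233-$sub" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.3 -s 4420
done
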
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:19.621 { 00:24:19.621 "params": { 00:24:19.621 "name": "Nvme$subsystem", 00:24:19.621 "trtype": "$TEST_TRANSPORT", 00:24:19.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:19.621 "adrfam": "ipv4", 00:24:19.621 "trsvcid": "$NVMF_PORT", 00:24:19.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:19.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:19.621 "hdgst": ${hdgst:-false}, 00:24:19.621 "ddgst": ${ddgst:-false} 00:24:19.621 }, 00:24:19.621 "method": "bdev_nvme_attach_controller" 00:24:19.621 } 00:24:19.621 EOF 00:24:19.621 )") 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
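
gen_nvmf_target_json, traced here, emits one bdev_nvme_attach_controller block per subsystem from the same heredoc template, joins the blocks with commas, and pretty-prints the result through jq; that JSON is what fio receives on /dev/fd/62, and it is printed in full just below. A condensed paraphrase of the visible pieces:

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined blocks; the helper runs this through jq and hands it to fio on /dev/fd/62
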
00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:24:19.621 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:19.621 "params": { 00:24:19.621 "name": "Nvme0", 00:24:19.621 "trtype": "tcp", 00:24:19.621 "traddr": "10.0.0.3", 00:24:19.621 "adrfam": "ipv4", 00:24:19.621 "trsvcid": "4420", 00:24:19.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:19.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:19.622 "hdgst": false, 00:24:19.622 "ddgst": false 00:24:19.622 }, 00:24:19.622 "method": "bdev_nvme_attach_controller" 00:24:19.622 },{ 00:24:19.622 "params": { 00:24:19.622 "name": "Nvme1", 00:24:19.622 "trtype": "tcp", 00:24:19.622 "traddr": "10.0.0.3", 00:24:19.622 "adrfam": "ipv4", 00:24:19.622 "trsvcid": "4420", 00:24:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.622 "hdgst": false, 00:24:19.622 "ddgst": false 00:24:19.622 }, 00:24:19.622 "method": "bdev_nvme_attach_controller" 00:24:19.622 }' 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:19.622 10:37:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:19.622 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:19.622 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:19.622 fio-3.35 00:24:19.622 Starting 2 threads 00:24:29.634 00:24:29.634 filename0: (groupid=0, jobs=1): err= 0: pid=98382: Tue Dec 10 10:38:04 2024 00:24:29.634 read: IOPS=5380, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:24:29.634 slat (nsec): min=6278, max=66580, avg=12785.77, stdev=4200.57 00:24:29.634 clat (usec): min=445, max=1242, avg=708.85, stdev=53.91 00:24:29.634 lat (usec): min=456, max=1267, avg=721.63, stdev=54.88 00:24:29.634 clat percentiles (usec): 00:24:29.634 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 668], 00:24:29.634 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:24:29.634 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 807], 00:24:29.634 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 1020], 00:24:29.634 | 99.99th=[ 1074] 00:24:29.634 bw ( KiB/s): min=21088, max=22048, per=50.01%, avg=21527.58, stdev=283.37, samples=19 00:24:29.634 iops : min= 5272, max= 
5512, avg=5381.79, stdev=70.85, samples=19 00:24:29.634 lat (usec) : 500=0.01%, 750=83.80%, 1000=16.12% 00:24:29.634 lat (msec) : 2=0.07% 00:24:29.634 cpu : usr=90.38%, sys=8.25%, ctx=8, majf=0, minf=0 00:24:29.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:29.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.634 issued rwts: total=53813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:29.634 filename1: (groupid=0, jobs=1): err= 0: pid=98383: Tue Dec 10 10:38:04 2024 00:24:29.634 read: IOPS=5381, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:24:29.634 slat (nsec): min=6215, max=74771, avg=12978.37, stdev=4281.93 00:24:29.634 clat (usec): min=435, max=1127, avg=707.70, stdev=48.44 00:24:29.634 lat (usec): min=442, max=1153, avg=720.67, stdev=49.13 00:24:29.634 clat percentiles (usec): 00:24:29.634 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 668], 00:24:29.634 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 709], 00:24:29.634 | 70.00th=[ 717], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:24:29.634 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 988], 99.95th=[ 1020], 00:24:29.634 | 99.99th=[ 1090] 00:24:29.634 bw ( KiB/s): min=21088, max=22048, per=50.01%, avg=21527.58, stdev=283.37, samples=19 00:24:29.634 iops : min= 5272, max= 5512, avg=5381.79, stdev=70.85, samples=19 00:24:29.634 lat (usec) : 500=0.01%, 750=86.10%, 1000=13.81% 00:24:29.634 lat (msec) : 2=0.07% 00:24:29.634 cpu : usr=90.39%, sys=8.25%, ctx=30, majf=0, minf=0 00:24:29.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:29.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.634 issued rwts: total=53816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:29.634 00:24:29.634 Run status group 0 (all jobs): 00:24:29.634 READ: bw=42.0MiB/s (44.1MB/s), 21.0MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=420MiB (441MB), run=10001-10001msec 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:29.634 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:29.635 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 ************************************ 00:24:29.635 END TEST fio_dif_1_multi_subsystems 00:24:29.635 ************************************ 00:24:29.635 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.635 00:24:29.635 real 0m10.999s 00:24:29.635 user 0m18.746s 00:24:29.635 sys 0m1.912s 00:24:29.635 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 10:38:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:29.635 10:38:04 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:29.635 10:38:04 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 ************************************ 00:24:29.635 START TEST fio_dif_rand_params 00:24:29.635 ************************************ 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:29.635 10:38:04 
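
The two-subsystem run above ends with each thread reading about 21.0 MiB/s (50.01% of the group) for roughly 42 MiB/s aggregate. fio_dif_rand_params then starts cycling parameter sets; this first pass recreates the null bdev with --dif-type 3 and switches to larger, parallel I/O: 128 KiB blocks, 3 jobs, iodepth 3, for 5 seconds. The generated job file is not shown; a hypothetical equivalent consistent with those parameters (the Nvme0n1 filename is again an assumption based on SPDK's usual bdev naming) is:

cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=5

[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
filename=Nvme0n1
EOF
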
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 bdev_null0 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 [2024-12-10 10:38:04.361914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:29.635 { 00:24:29.635 "params": { 00:24:29.635 "name": "Nvme$subsystem", 00:24:29.635 "trtype": "$TEST_TRANSPORT", 00:24:29.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.635 "adrfam": "ipv4", 00:24:29.635 "trsvcid": "$NVMF_PORT", 00:24:29.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.635 "hdgst": ${hdgst:-false}, 00:24:29.635 "ddgst": ${ddgst:-false} 00:24:29.635 }, 00:24:29.635 "method": "bdev_nvme_attach_controller" 00:24:29.635 } 00:24:29.635 EOF 00:24:29.635 )") 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:29.635 
10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:29.635 "params": { 00:24:29.635 "name": "Nvme0", 00:24:29.635 "trtype": "tcp", 00:24:29.635 "traddr": "10.0.0.3", 00:24:29.635 "adrfam": "ipv4", 00:24:29.635 "trsvcid": "4420", 00:24:29.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:29.635 "hdgst": false, 00:24:29.635 "ddgst": false 00:24:29.635 }, 00:24:29.635 "method": "bdev_nvme_attach_controller" 00:24:29.635 }' 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:29.635 10:38:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:29.635 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:29.635 ... 00:24:29.635 fio-3.35 00:24:29.635 Starting 3 threads 00:24:34.902 00:24:34.903 filename0: (groupid=0, jobs=1): err= 0: pid=98533: Tue Dec 10 10:38:10 2024 00:24:34.903 read: IOPS=286, BW=35.8MiB/s (37.6MB/s)(179MiB/5001msec) 00:24:34.903 slat (nsec): min=6795, max=52209, avg=14789.48, stdev=4417.37 00:24:34.903 clat (usec): min=10016, max=12034, avg=10429.09, stdev=374.47 00:24:34.903 lat (usec): min=10029, max=12048, avg=10443.88, stdev=374.85 00:24:34.903 clat percentiles (usec): 00:24:34.903 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:24:34.903 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:24:34.903 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:24:34.903 | 99.00th=[11863], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:24:34.903 | 99.99th=[11994] 00:24:34.903 bw ( KiB/s): min=36096, max=37632, per=33.34%, avg=36693.33, stdev=640.00, samples=9 00:24:34.903 iops : min= 282, max= 294, avg=286.67, stdev= 5.00, samples=9 00:24:34.903 lat (msec) : 20=100.00% 00:24:34.903 cpu : usr=89.94%, sys=9.50%, ctx=6, majf=0, minf=0 00:24:34.903 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.903 issued rwts: total=1434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.903 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.903 filename0: (groupid=0, jobs=1): err= 0: pid=98534: Tue Dec 10 10:38:10 2024 00:24:34.903 read: IOPS=286, BW=35.8MiB/s (37.6MB/s)(179MiB/5001msec) 00:24:34.903 slat (nsec): min=6650, max=69538, avg=14930.29, stdev=4466.50 00:24:34.903 clat (usec): min=9970, max=12090, avg=10428.03, stdev=373.51 00:24:34.903 lat (usec): min=9992, max=12101, avg=10442.96, stdev=373.88 00:24:34.903 clat percentiles (usec): 00:24:34.903 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:24:34.903 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:24:34.903 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:24:34.903 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:24:34.903 | 99.99th=[12125] 00:24:34.903 bw ( KiB/s): min=36096, max=37632, per=33.34%, avg=36693.33, stdev=640.00, samples=9 00:24:34.903 iops : min= 282, max= 294, avg=286.67, stdev= 5.00, samples=9 00:24:34.903 lat (msec) : 10=0.14%, 20=99.86% 00:24:34.903 cpu : usr=90.40%, sys=9.02%, ctx=29, majf=0, minf=0 00:24:34.903 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.903 issued rwts: total=1434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.903 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.903 filename0: (groupid=0, jobs=1): err= 0: pid=98535: Tue Dec 10 10:38:10 2024 00:24:34.903 read: IOPS=286, BW=35.9MiB/s (37.6MB/s)(180MiB/5007msec) 00:24:34.903 slat (nsec): min=6670, max=44492, avg=13735.20, stdev=4903.47 00:24:34.903 clat (usec): min=5655, max=12044, avg=10420.94, 
stdev=431.93 00:24:34.903 lat (usec): min=5662, max=12056, avg=10434.67, stdev=432.26 00:24:34.903 clat percentiles (usec): 00:24:34.903 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10159], 20.00th=[10159], 00:24:34.903 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:24:34.903 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:24:34.903 | 99.00th=[11863], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:24:34.903 | 99.99th=[11994] 00:24:34.903 bw ( KiB/s): min=36096, max=36864, per=33.36%, avg=36710.40, stdev=323.82, samples=10 00:24:34.903 iops : min= 282, max= 288, avg=286.80, stdev= 2.53, samples=10 00:24:34.903 lat (msec) : 10=0.21%, 20=99.79% 00:24:34.903 cpu : usr=90.77%, sys=8.67%, ctx=9, majf=0, minf=0 00:24:34.903 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.903 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.903 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.903 00:24:34.903 Run status group 0 (all jobs): 00:24:34.903 READ: bw=107MiB/s (113MB/s), 35.8MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=538MiB (564MB), run=5001-5007msec 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
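
Three threads at ~35.8 MiB/s each put the DIF-type-3 128 KiB pass at about 107 MiB/s aggregate. The next parameter set (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2) tears the single subsystem down and builds three of them, this time with protection information type 2; the upcoming fio run uses 4 KiB random reads at queue depth 16 with numjobs=8 against the three files. The create_subsystems 0 1 2 loop beginning here is the same pattern as before with only the DIF type changed:

for sub in 0 1 2; do
    rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" --serial-number "53313233-$sub" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.3 -s 4420
done
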
00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 bdev_null0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 [2024-12-10 10:38:10.245048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 bdev_null1 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:35.163 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.164 bdev_null2 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
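At this point the test has rebuilt its data path for the DIF-type-2 pass: three null bdevs (64 MB, 512-byte blocks, 16-byte metadata, DIF type 2), each exported through its own NVMe/TCP subsystem listening on 10.0.0.3:4420, before the fio JSON configuration is generated below. A minimal standalone sketch of that per-subsystem setup follows, using SPDK's rpc.py; the script path is an assumption based on the repo layout shown elsewhere in this log, and it presumes the nvmf target is already running with a TCP transport created earlier in the test (not part of this excerpt).

# Sketch only: replays the rpc_cmd sequence above via scripts/rpc.py.
# The rpc.py path and the pre-created TCP transport are assumptions.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in 0 1 2; do
    "$rpc" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
           --serial-number "53313233-$i" --allow-any-host
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
           -t tcp -a 10.0.0.3 -s 4420
done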
00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:35.164 { 00:24:35.164 "params": { 00:24:35.164 "name": "Nvme$subsystem", 00:24:35.164 "trtype": "$TEST_TRANSPORT", 00:24:35.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.164 "adrfam": "ipv4", 00:24:35.164 "trsvcid": "$NVMF_PORT", 00:24:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.164 "hdgst": ${hdgst:-false}, 00:24:35.164 "ddgst": ${ddgst:-false} 00:24:35.164 }, 00:24:35.164 "method": "bdev_nvme_attach_controller" 00:24:35.164 } 00:24:35.164 EOF 00:24:35.164 )") 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:35.164 { 00:24:35.164 "params": { 00:24:35.164 "name": "Nvme$subsystem", 00:24:35.164 "trtype": "$TEST_TRANSPORT", 00:24:35.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.164 "adrfam": "ipv4", 00:24:35.164 "trsvcid": "$NVMF_PORT", 00:24:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.164 "hdgst": ${hdgst:-false}, 00:24:35.164 "ddgst": ${ddgst:-false} 00:24:35.164 }, 00:24:35.164 "method": "bdev_nvme_attach_controller" 00:24:35.164 } 00:24:35.164 EOF 00:24:35.164 )") 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:35.164 10:38:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:35.164 { 00:24:35.164 "params": { 00:24:35.164 "name": "Nvme$subsystem", 00:24:35.164 "trtype": "$TEST_TRANSPORT", 00:24:35.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.164 "adrfam": "ipv4", 00:24:35.164 "trsvcid": "$NVMF_PORT", 00:24:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.164 "hdgst": ${hdgst:-false}, 00:24:35.164 "ddgst": ${ddgst:-false} 00:24:35.164 }, 00:24:35.164 "method": "bdev_nvme_attach_controller" 00:24:35.164 } 00:24:35.164 EOF 00:24:35.164 )") 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:35.164 "params": { 00:24:35.164 "name": "Nvme0", 00:24:35.164 "trtype": "tcp", 00:24:35.164 "traddr": "10.0.0.3", 00:24:35.164 "adrfam": "ipv4", 00:24:35.164 "trsvcid": "4420", 00:24:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:35.164 "hdgst": false, 00:24:35.164 "ddgst": false 00:24:35.164 }, 00:24:35.164 "method": "bdev_nvme_attach_controller" 00:24:35.164 },{ 00:24:35.164 "params": { 00:24:35.164 "name": "Nvme1", 00:24:35.164 "trtype": "tcp", 00:24:35.164 "traddr": "10.0.0.3", 00:24:35.164 "adrfam": "ipv4", 00:24:35.164 "trsvcid": "4420", 00:24:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.164 "hdgst": false, 00:24:35.164 "ddgst": false 00:24:35.164 }, 00:24:35.164 "method": "bdev_nvme_attach_controller" 00:24:35.164 },{ 00:24:35.164 "params": { 00:24:35.164 "name": "Nvme2", 00:24:35.164 "trtype": "tcp", 00:24:35.164 "traddr": "10.0.0.3", 00:24:35.164 "adrfam": "ipv4", 00:24:35.164 "trsvcid": "4420", 00:24:35.164 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:35.164 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:35.164 "hdgst": false, 00:24:35.164 "ddgst": false 00:24:35.164 }, 00:24:35.164 "method": "bdev_nvme_attach_controller" 00:24:35.164 }' 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:35.164 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:35.424 10:38:10 
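The JSON printed above is one of the two file descriptors handed to fio in the launch just below: /dev/fd/62 carries this bdev_nvme attach configuration, while /dev/fd/61 carries the job description built by gen_fio_conf. A rough standalone equivalent is sketched here; the job-file contents and the Nvme0n1/Nvme1n1/Nvme2n1 bdev names are assumptions inferred from the filename0-2 job lines and SPDK's usual controller-to-bdev naming, not copied from target/dif.sh.

# Sketch: approximate stand-in for the /dev/fd/62 + /dev/fd/61 pair used above.
# Assumes the attach config from the log is saved as /tmp/nvme_attach.json and that
# a controller named "Nvme0" exposes a bdev named "Nvme0n1", and so on.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
numjobs=8
iodepth=16

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_attach.json /tmp/dif_rand.fio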
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:35.424 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:35.424 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:35.424 10:38:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:35.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:35.424 ... 00:24:35.424 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:35.424 ... 00:24:35.424 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:35.424 ... 00:24:35.424 fio-3.35 00:24:35.424 Starting 24 threads 00:24:47.630 00:24:47.630 filename0: (groupid=0, jobs=1): err= 0: pid=98630: Tue Dec 10 10:38:21 2024 00:24:47.630 read: IOPS=190, BW=761KiB/s (779kB/s)(7644KiB/10050msec) 00:24:47.630 slat (usec): min=3, max=6033, avg=18.20, stdev=157.81 00:24:47.630 clat (msec): min=5, max=156, avg=84.00, stdev=26.68 00:24:47.630 lat (msec): min=5, max=156, avg=84.01, stdev=26.68 00:24:47.630 clat percentiles (msec): 00:24:47.630 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 67], 00:24:47.630 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 94], 00:24:47.630 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 121], 00:24:47.630 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:24:47.630 | 99.99th=[ 157] 00:24:47.630 bw ( KiB/s): min= 528, max= 1136, per=4.16%, avg=757.90, stdev=174.29, samples=20 00:24:47.630 iops : min= 132, max= 284, avg=189.45, stdev=43.58, samples=20 00:24:47.630 lat (msec) : 10=2.51%, 20=0.10%, 50=7.27%, 100=57.56%, 250=32.55% 00:24:47.630 cpu : usr=36.80%, sys=2.22%, ctx=1136, majf=0, minf=9 00:24:47.630 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:47.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.630 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.630 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.630 filename0: (groupid=0, jobs=1): err= 0: pid=98631: Tue Dec 10 10:38:21 2024 00:24:47.630 read: IOPS=184, BW=739KiB/s (757kB/s)(7420KiB/10039msec) 00:24:47.630 slat (usec): min=6, max=8024, avg=25.86, stdev=246.02 00:24:47.630 clat (msec): min=29, max=159, avg=86.39, stdev=22.61 00:24:47.630 lat (msec): min=29, max=159, avg=86.42, stdev=22.62 00:24:47.630 clat percentiles (msec): 00:24:47.630 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 71], 00:24:47.630 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 96], 00:24:47.630 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 114], 95.00th=[ 121], 00:24:47.630 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 161], 99.95th=[ 161], 00:24:47.630 | 99.99th=[ 161] 00:24:47.630 bw ( KiB/s): min= 528, max= 992, per=4.04%, avg=735.50, stdev=127.68, samples=20 00:24:47.630 iops : min= 132, max= 248, avg=183.85, stdev=31.92, samples=20 00:24:47.630 lat (msec) : 50=8.68%, 100=55.90%, 250=35.42% 00:24:47.630 cpu : usr=37.23%, sys=2.39%, ctx=1177, majf=0, minf=9 00:24:47.630 IO depths : 1=0.1%, 2=1.9%, 4=7.3%, 8=75.4%, 16=15.3%, 
32=0.0%, >=64=0.0% 00:24:47.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.630 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.630 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.630 filename0: (groupid=0, jobs=1): err= 0: pid=98632: Tue Dec 10 10:38:21 2024 00:24:47.630 read: IOPS=195, BW=783KiB/s (802kB/s)(7848KiB/10019msec) 00:24:47.630 slat (usec): min=4, max=8023, avg=20.99, stdev=202.22 00:24:47.630 clat (msec): min=31, max=154, avg=81.56, stdev=24.30 00:24:47.630 lat (msec): min=31, max=154, avg=81.59, stdev=24.29 00:24:47.630 clat percentiles (msec): 00:24:47.630 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:24:47.630 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:24:47.630 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 121], 00:24:47.630 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:24:47.630 | 99.99th=[ 155] 00:24:47.630 bw ( KiB/s): min= 456, max= 1080, per=4.28%, avg=778.30, stdev=162.22, samples=20 00:24:47.630 iops : min= 114, max= 270, avg=194.55, stdev=40.57, samples=20 00:24:47.630 lat (msec) : 50=12.49%, 100=59.02%, 250=28.49% 00:24:47.630 cpu : usr=37.30%, sys=2.43%, ctx=1172, majf=0, minf=9 00:24:47.630 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:47.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.630 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.630 issued rwts: total=1962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.630 filename0: (groupid=0, jobs=1): err= 0: pid=98633: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=198, BW=793KiB/s (812kB/s)(7940KiB/10011msec) 00:24:47.631 slat (usec): min=3, max=4026, avg=17.61, stdev=90.17 00:24:47.631 clat (msec): min=15, max=236, avg=80.59, stdev=25.97 00:24:47.631 lat (msec): min=15, max=237, avg=80.60, stdev=25.97 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:47.631 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:24:47.631 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 120], 00:24:47.631 | 99.00th=[ 138], 99.50th=[ 203], 99.90th=[ 236], 99.95th=[ 239], 00:24:47.631 | 99.99th=[ 239] 00:24:47.631 bw ( KiB/s): min= 384, max= 1024, per=4.26%, avg=775.05, stdev=158.18, samples=19 00:24:47.631 iops : min= 96, max= 256, avg=193.74, stdev=39.56, samples=19 00:24:47.631 lat (msec) : 20=0.30%, 50=13.70%, 100=60.25%, 250=25.74% 00:24:47.631 cpu : usr=39.39%, sys=2.54%, ctx=1205, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename0: (groupid=0, jobs=1): err= 0: pid=98634: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=188, BW=755KiB/s (773kB/s)(7584KiB/10047msec) 00:24:47.631 slat (usec): min=8, max=8030, avg=26.78, stdev=318.57 00:24:47.631 clat (msec): min=11, max=155, avg=84.60, stdev=23.80 00:24:47.631 lat (msec): min=11, max=155, avg=84.63, 
stdev=23.80 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 23], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 71], 00:24:47.631 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 95], 00:24:47.631 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 121], 00:24:47.631 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 157], 00:24:47.631 | 99.99th=[ 157] 00:24:47.631 bw ( KiB/s): min= 528, max= 992, per=4.13%, avg=752.00, stdev=138.61, samples=20 00:24:47.631 iops : min= 132, max= 248, avg=188.00, stdev=34.65, samples=20 00:24:47.631 lat (msec) : 20=0.84%, 50=9.18%, 100=58.91%, 250=31.07% 00:24:47.631 cpu : usr=32.11%, sys=2.00%, ctx=881, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=79.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename0: (groupid=0, jobs=1): err= 0: pid=98635: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=191, BW=766KiB/s (784kB/s)(7660KiB/10006msec) 00:24:47.631 slat (usec): min=4, max=8034, avg=24.90, stdev=229.05 00:24:47.631 clat (msec): min=6, max=231, avg=83.44, stdev=25.32 00:24:47.631 lat (msec): min=6, max=231, avg=83.47, stdev=25.31 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 65], 00:24:47.631 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 88], 00:24:47.631 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 118], 00:24:47.631 | 99.00th=[ 132], 99.50th=[ 194], 99.90th=[ 232], 99.95th=[ 232], 00:24:47.631 | 99.99th=[ 232] 00:24:47.631 bw ( KiB/s): min= 496, max= 1080, per=4.09%, avg=743.89, stdev=157.37, samples=19 00:24:47.631 iops : min= 124, max= 270, avg=185.95, stdev=39.32, samples=19 00:24:47.631 lat (msec) : 10=0.16%, 20=0.68%, 50=9.77%, 100=59.84%, 250=29.56% 00:24:47.631 cpu : usr=43.41%, sys=2.75%, ctx=1279, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=76.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename0: (groupid=0, jobs=1): err= 0: pid=98636: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=187, BW=752KiB/s (770kB/s)(7556KiB/10048msec) 00:24:47.631 slat (usec): min=4, max=8025, avg=26.92, stdev=281.23 00:24:47.631 clat (msec): min=11, max=159, avg=84.87, stdev=22.43 00:24:47.631 lat (msec): min=11, max=159, avg=84.89, stdev=22.43 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 22], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 70], 00:24:47.631 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 96], 00:24:47.631 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 117], 00:24:47.631 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 161], 00:24:47.631 | 99.99th=[ 161] 00:24:47.631 bw ( KiB/s): min= 624, max= 968, per=4.11%, avg=748.85, stdev=117.15, samples=20 00:24:47.631 iops : min= 156, max= 242, avg=187.20, stdev=29.27, samples=20 00:24:47.631 lat (msec) : 20=0.95%, 50=7.68%, 100=60.35%, 250=31.02% 00:24:47.631 cpu : usr=38.86%, 
sys=2.71%, ctx=1144, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename0: (groupid=0, jobs=1): err= 0: pid=98637: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=197, BW=789KiB/s (808kB/s)(7896KiB/10009msec) 00:24:47.631 slat (usec): min=3, max=8026, avg=19.07, stdev=180.40 00:24:47.631 clat (msec): min=11, max=235, avg=81.02, stdev=26.05 00:24:47.631 lat (msec): min=11, max=235, avg=81.04, stdev=26.04 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:47.631 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 00:24:47.631 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 118], 00:24:47.631 | 99.00th=[ 132], 99.50th=[ 199], 99.90th=[ 236], 99.95th=[ 236], 00:24:47.631 | 99.99th=[ 236] 00:24:47.631 bw ( KiB/s): min= 496, max= 1080, per=4.23%, avg=769.63, stdev=168.07, samples=19 00:24:47.631 iops : min= 124, max= 270, avg=192.32, stdev=42.03, samples=19 00:24:47.631 lat (msec) : 20=0.66%, 50=13.37%, 100=59.07%, 250=26.90% 00:24:47.631 cpu : usr=33.39%, sys=1.97%, ctx=1035, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename1: (groupid=0, jobs=1): err= 0: pid=98638: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=183, BW=733KiB/s (750kB/s)(7328KiB/10004msec) 00:24:47.631 slat (usec): min=4, max=8024, avg=25.20, stdev=243.34 00:24:47.631 clat (msec): min=3, max=219, avg=87.21, stdev=28.18 00:24:47.631 lat (msec): min=3, max=219, avg=87.24, stdev=28.19 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 7], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 68], 00:24:47.631 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 100], 00:24:47.631 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 123], 00:24:47.631 | 99.00th=[ 155], 99.50th=[ 184], 99.90th=[ 220], 99.95th=[ 220], 00:24:47.631 | 99.99th=[ 220] 00:24:47.631 bw ( KiB/s): min= 496, max= 1024, per=3.84%, avg=698.53, stdev=156.29, samples=19 00:24:47.631 iops : min= 124, max= 256, avg=174.63, stdev=39.07, samples=19 00:24:47.631 lat (msec) : 4=0.33%, 10=1.42%, 20=0.49%, 50=7.64%, 100=53.22% 00:24:47.631 lat (msec) : 250=36.90% 00:24:47.631 cpu : usr=42.14%, sys=2.51%, ctx=1490, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=3.3%, 4=13.3%, 8=69.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename1: (groupid=0, jobs=1): err= 0: pid=98639: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=190, BW=762KiB/s (781kB/s)(7640KiB/10023msec) 00:24:47.631 slat (usec): 
min=4, max=8022, avg=20.87, stdev=204.96 00:24:47.631 clat (msec): min=31, max=154, avg=83.86, stdev=22.25 00:24:47.631 lat (msec): min=31, max=154, avg=83.88, stdev=22.26 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 67], 00:24:47.631 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 93], 00:24:47.631 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 118], 00:24:47.631 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 155], 00:24:47.631 | 99.99th=[ 155] 00:24:47.631 bw ( KiB/s): min= 528, max= 992, per=4.16%, avg=757.35, stdev=134.76, samples=20 00:24:47.631 iops : min= 132, max= 248, avg=189.30, stdev=33.72, samples=20 00:24:47.631 lat (msec) : 50=10.05%, 100=60.31%, 250=29.63% 00:24:47.631 cpu : usr=38.83%, sys=2.30%, ctx=1306, majf=0, minf=9 00:24:47.631 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.631 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.631 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.631 filename1: (groupid=0, jobs=1): err= 0: pid=98640: Tue Dec 10 10:38:21 2024 00:24:47.631 read: IOPS=178, BW=712KiB/s (729kB/s)(7160KiB/10055msec) 00:24:47.631 slat (usec): min=7, max=4027, avg=20.67, stdev=164.17 00:24:47.631 clat (msec): min=4, max=154, avg=89.57, stdev=28.26 00:24:47.631 lat (msec): min=4, max=154, avg=89.59, stdev=28.27 00:24:47.631 clat percentiles (msec): 00:24:47.631 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 59], 20.00th=[ 72], 00:24:47.631 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 103], 00:24:47.631 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 140], 00:24:47.631 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:24:47.631 | 99.99th=[ 155] 00:24:47.631 bw ( KiB/s): min= 512, max= 1253, per=3.91%, avg=711.35, stdev=195.63, samples=20 00:24:47.631 iops : min= 128, max= 313, avg=177.80, stdev=48.90, samples=20 00:24:47.631 lat (msec) : 10=3.58%, 20=0.11%, 50=4.80%, 100=50.00%, 250=41.51% 00:24:47.631 cpu : usr=38.04%, sys=2.50%, ctx=1252, majf=0, minf=0 00:24:47.631 IO depths : 1=0.1%, 2=4.2%, 4=16.6%, 8=65.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=92.2%, 8=4.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=1790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename1: (groupid=0, jobs=1): err= 0: pid=98641: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=200, BW=804KiB/s (823kB/s)(8040KiB/10004msec) 00:24:47.632 slat (usec): min=4, max=8025, avg=20.71, stdev=199.89 00:24:47.632 clat (msec): min=5, max=223, avg=79.53, stdev=25.78 00:24:47.632 lat (msec): min=5, max=223, avg=79.55, stdev=25.78 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 14], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:47.632 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:24:47.632 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 118], 00:24:47.632 | 99.00th=[ 131], 99.50th=[ 197], 99.90th=[ 197], 99.95th=[ 224], 00:24:47.632 | 99.99th=[ 224] 00:24:47.632 bw ( KiB/s): min= 496, max= 1048, per=4.29%, avg=781.89, stdev=151.55, samples=19 00:24:47.632 iops : min= 124, max= 
262, avg=195.47, stdev=37.89, samples=19 00:24:47.632 lat (msec) : 10=0.75%, 20=0.50%, 50=13.38%, 100=61.34%, 250=24.03% 00:24:47.632 cpu : usr=38.22%, sys=2.37%, ctx=1183, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename1: (groupid=0, jobs=1): err= 0: pid=98642: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=196, BW=787KiB/s (806kB/s)(7880KiB/10008msec) 00:24:47.632 slat (usec): min=4, max=8026, avg=28.14, stdev=312.44 00:24:47.632 clat (msec): min=10, max=231, avg=81.15, stdev=25.37 00:24:47.632 lat (msec): min=10, max=231, avg=81.17, stdev=25.37 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:24:47.632 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:24:47.632 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 121], 00:24:47.632 | 99.00th=[ 132], 99.50th=[ 197], 99.90th=[ 232], 99.95th=[ 232], 00:24:47.632 | 99.99th=[ 232] 00:24:47.632 bw ( KiB/s): min= 496, max= 992, per=4.21%, avg=765.05, stdev=142.18, samples=19 00:24:47.632 iops : min= 124, max= 248, avg=191.16, stdev=35.56, samples=19 00:24:47.632 lat (msec) : 20=0.66%, 50=13.20%, 100=61.07%, 250=25.08% 00:24:47.632 cpu : usr=32.07%, sys=1.85%, ctx=867, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename1: (groupid=0, jobs=1): err= 0: pid=98643: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=173, BW=692KiB/s (709kB/s)(6948KiB/10034msec) 00:24:47.632 slat (usec): min=4, max=12023, avg=34.30, stdev=381.88 00:24:47.632 clat (msec): min=38, max=157, avg=92.11, stdev=22.31 00:24:47.632 lat (msec): min=38, max=157, avg=92.15, stdev=22.31 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 67], 20.00th=[ 72], 00:24:47.632 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 104], 00:24:47.632 | 70.00th=[ 107], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 122], 00:24:47.632 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:24:47.632 | 99.99th=[ 159] 00:24:47.632 bw ( KiB/s): min= 512, max= 952, per=3.79%, avg=689.65, stdev=146.16, samples=20 00:24:47.632 iops : min= 128, max= 238, avg=172.40, stdev=36.55, samples=20 00:24:47.632 lat (msec) : 50=3.22%, 100=52.56%, 250=44.21% 00:24:47.632 cpu : usr=45.55%, sys=2.60%, ctx=1889, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=4.0%, 4=16.2%, 8=65.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=92.0%, 8=4.4%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename1: (groupid=0, jobs=1): err= 0: pid=98644: Tue Dec 10 10:38:21 2024 
00:24:47.632 read: IOPS=192, BW=770KiB/s (788kB/s)(7704KiB/10006msec) 00:24:47.632 slat (usec): min=4, max=8035, avg=19.24, stdev=182.83 00:24:47.632 clat (msec): min=16, max=237, avg=83.03, stdev=25.94 00:24:47.632 lat (msec): min=16, max=237, avg=83.05, stdev=25.94 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:24:47.632 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 86], 00:24:47.632 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 121], 00:24:47.632 | 99.00th=[ 144], 99.50th=[ 201], 99.90th=[ 239], 99.95th=[ 239], 00:24:47.632 | 99.99th=[ 239] 00:24:47.632 bw ( KiB/s): min= 448, max= 1048, per=4.13%, avg=752.42, stdev=162.46, samples=19 00:24:47.632 iops : min= 112, max= 262, avg=188.00, stdev=40.63, samples=19 00:24:47.632 lat (msec) : 20=0.31%, 50=10.59%, 100=60.64%, 250=28.45% 00:24:47.632 cpu : usr=35.25%, sys=2.31%, ctx=1070, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=1926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename1: (groupid=0, jobs=1): err= 0: pid=98645: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=204, BW=817KiB/s (837kB/s)(8172KiB/10001msec) 00:24:47.632 slat (usec): min=3, max=8034, avg=27.32, stdev=288.03 00:24:47.632 clat (usec): min=1498, max=223876, avg=78220.93, stdev=26866.47 00:24:47.632 lat (usec): min=1505, max=223888, avg=78248.25, stdev=26863.72 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 5], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:24:47.632 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:24:47.632 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 118], 00:24:47.632 | 99.00th=[ 129], 99.50th=[ 188], 99.90th=[ 188], 99.95th=[ 224], 00:24:47.632 | 99.99th=[ 224] 00:24:47.632 bw ( KiB/s): min= 496, max= 1024, per=4.30%, avg=782.74, stdev=144.57, samples=19 00:24:47.632 iops : min= 124, max= 256, avg=195.68, stdev=36.14, samples=19 00:24:47.632 lat (msec) : 2=0.59%, 4=0.34%, 10=0.98%, 20=0.59%, 50=12.43% 00:24:47.632 lat (msec) : 100=61.28%, 250=23.79% 00:24:47.632 cpu : usr=38.20%, sys=2.47%, ctx=1324, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename2: (groupid=0, jobs=1): err= 0: pid=98646: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=187, BW=749KiB/s (767kB/s)(7528KiB/10049msec) 00:24:47.632 slat (usec): min=4, max=8029, avg=34.88, stdev=412.48 00:24:47.632 clat (msec): min=8, max=149, avg=85.19, stdev=24.20 00:24:47.632 lat (msec): min=8, max=149, avg=85.22, stdev=24.20 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 71], 00:24:47.632 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 95], 00:24:47.632 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 114], 95.00th=[ 121], 00:24:47.632 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 144], 
99.95th=[ 150], 00:24:47.632 | 99.99th=[ 150] 00:24:47.632 bw ( KiB/s): min= 528, max= 1010, per=4.10%, avg=746.40, stdev=145.28, samples=20 00:24:47.632 iops : min= 132, max= 252, avg=186.55, stdev=36.29, samples=20 00:24:47.632 lat (msec) : 10=1.49%, 20=0.32%, 50=6.70%, 100=58.55%, 250=32.94% 00:24:47.632 cpu : usr=32.83%, sys=1.81%, ctx=927, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=78.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=88.9%, 8=10.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=1882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename2: (groupid=0, jobs=1): err= 0: pid=98647: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=188, BW=755KiB/s (773kB/s)(7556KiB/10009msec) 00:24:47.632 slat (usec): min=3, max=12025, avg=29.95, stdev=379.88 00:24:47.632 clat (msec): min=15, max=187, avg=84.61, stdev=25.03 00:24:47.632 lat (msec): min=15, max=188, avg=84.64, stdev=25.03 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 62], 00:24:47.632 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 95], 00:24:47.632 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 123], 00:24:47.632 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 188], 99.95th=[ 188], 00:24:47.632 | 99.99th=[ 188] 00:24:47.632 bw ( KiB/s): min= 512, max= 1000, per=4.04%, avg=735.00, stdev=142.22, samples=19 00:24:47.632 iops : min= 128, max= 250, avg=183.63, stdev=35.61, samples=19 00:24:47.632 lat (msec) : 20=0.37%, 50=11.65%, 100=57.23%, 250=30.76% 00:24:47.632 cpu : usr=33.08%, sys=1.86%, ctx=892, majf=0, minf=9 00:24:47.632 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:47.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.632 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.632 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.632 filename2: (groupid=0, jobs=1): err= 0: pid=98648: Tue Dec 10 10:38:21 2024 00:24:47.632 read: IOPS=178, BW=715KiB/s (732kB/s)(7156KiB/10013msec) 00:24:47.632 slat (usec): min=3, max=12025, avg=25.45, stdev=341.29 00:24:47.632 clat (msec): min=23, max=201, avg=89.37, stdev=24.73 00:24:47.632 lat (msec): min=23, max=201, avg=89.39, stdev=24.73 00:24:47.632 clat percentiles (msec): 00:24:47.632 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 72], 00:24:47.632 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 97], 00:24:47.632 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 132], 00:24:47.632 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 203], 99.95th=[ 203], 00:24:47.632 | 99.99th=[ 203] 00:24:47.632 bw ( KiB/s): min= 400, max= 1024, per=3.91%, avg=711.90, stdev=167.28, samples=20 00:24:47.632 iops : min= 100, max= 256, avg=177.95, stdev=41.83, samples=20 00:24:47.632 lat (msec) : 50=9.11%, 100=54.78%, 250=36.11% 00:24:47.633 cpu : usr=32.18%, sys=2.04%, ctx=876, majf=0, minf=9 00:24:47.633 IO depths : 1=0.1%, 2=3.3%, 4=13.3%, 8=69.3%, 16=14.1%, 32=0.0%, >=64=0.0% 00:24:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 complete : 0=0.0%, 4=90.7%, 8=6.3%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 issued rwts: total=1789,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:47.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.633 filename2: (groupid=0, jobs=1): err= 0: pid=98649: Tue Dec 10 10:38:21 2024 00:24:47.633 read: IOPS=188, BW=752KiB/s (770kB/s)(7544KiB/10028msec) 00:24:47.633 slat (usec): min=4, max=8025, avg=18.82, stdev=184.52 00:24:47.633 clat (msec): min=35, max=148, avg=84.93, stdev=22.34 00:24:47.633 lat (msec): min=35, max=148, avg=84.94, stdev=22.34 00:24:47.633 clat percentiles (msec): 00:24:47.633 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 70], 00:24:47.633 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 93], 00:24:47.633 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 120], 00:24:47.633 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:24:47.633 | 99.99th=[ 148] 00:24:47.633 bw ( KiB/s): min= 608, max= 1024, per=4.12%, avg=750.30, stdev=137.33, samples=20 00:24:47.633 iops : min= 152, max= 256, avg=187.55, stdev=34.35, samples=20 00:24:47.633 lat (msec) : 50=9.65%, 100=59.17%, 250=31.18% 00:24:47.633 cpu : usr=34.50%, sys=2.34%, ctx=1003, majf=0, minf=9 00:24:47.633 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 issued rwts: total=1886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.633 filename2: (groupid=0, jobs=1): err= 0: pid=98650: Tue Dec 10 10:38:21 2024 00:24:47.633 read: IOPS=187, BW=749KiB/s (767kB/s)(7512KiB/10024msec) 00:24:47.633 slat (usec): min=4, max=8023, avg=22.95, stdev=261.34 00:24:47.633 clat (msec): min=30, max=155, avg=85.29, stdev=22.42 00:24:47.633 lat (msec): min=30, max=155, avg=85.31, stdev=22.42 00:24:47.633 clat percentiles (msec): 00:24:47.633 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 71], 00:24:47.633 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 96], 00:24:47.633 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 121], 00:24:47.633 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 157], 00:24:47.633 | 99.99th=[ 157] 00:24:47.633 bw ( KiB/s): min= 507, max= 1000, per=4.09%, avg=744.45, stdev=145.81, samples=20 00:24:47.633 iops : min= 126, max= 250, avg=186.05, stdev=36.52, samples=20 00:24:47.633 lat (msec) : 50=10.22%, 100=57.93%, 250=31.84% 00:24:47.633 cpu : usr=31.14%, sys=2.09%, ctx=843, majf=0, minf=9 00:24:47.633 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 issued rwts: total=1878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.633 filename2: (groupid=0, jobs=1): err= 0: pid=98651: Tue Dec 10 10:38:21 2024 00:24:47.633 read: IOPS=193, BW=775KiB/s (793kB/s)(7776KiB/10035msec) 00:24:47.633 slat (usec): min=4, max=8030, avg=26.12, stdev=234.55 00:24:47.633 clat (msec): min=34, max=143, avg=82.39, stdev=22.13 00:24:47.633 lat (msec): min=34, max=143, avg=82.41, stdev=22.13 00:24:47.633 clat percentiles (msec): 00:24:47.633 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 64], 00:24:47.633 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 85], 00:24:47.633 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 
112], 95.00th=[ 121], 00:24:47.633 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 144], 00:24:47.633 | 99.99th=[ 144] 00:24:47.633 bw ( KiB/s): min= 608, max= 992, per=4.24%, avg=771.20, stdev=129.29, samples=20 00:24:47.633 iops : min= 152, max= 248, avg=192.80, stdev=32.32, samples=20 00:24:47.633 lat (msec) : 50=9.41%, 100=63.43%, 250=27.16% 00:24:47.633 cpu : usr=43.32%, sys=2.68%, ctx=1321, majf=0, minf=9 00:24:47.633 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.633 filename2: (groupid=0, jobs=1): err= 0: pid=98652: Tue Dec 10 10:38:21 2024 00:24:47.633 read: IOPS=185, BW=744KiB/s (761kB/s)(7456KiB/10027msec) 00:24:47.633 slat (usec): min=6, max=8030, avg=25.63, stdev=286.51 00:24:47.633 clat (msec): min=37, max=151, avg=85.86, stdev=22.78 00:24:47.633 lat (msec): min=37, max=151, avg=85.88, stdev=22.79 00:24:47.633 clat percentiles (msec): 00:24:47.633 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 70], 00:24:47.633 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 95], 00:24:47.633 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 121], 00:24:47.633 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:24:47.633 | 99.99th=[ 153] 00:24:47.633 bw ( KiB/s): min= 544, max= 1016, per=4.07%, avg=741.55, stdev=149.95, samples=20 00:24:47.633 iops : min= 136, max= 254, avg=185.35, stdev=37.52, samples=20 00:24:47.633 lat (msec) : 50=9.66%, 100=55.90%, 250=34.44% 00:24:47.633 cpu : usr=33.67%, sys=1.82%, ctx=1000, majf=0, minf=9 00:24:47.633 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.633 filename2: (groupid=0, jobs=1): err= 0: pid=98653: Tue Dec 10 10:38:21 2024 00:24:47.633 read: IOPS=197, BW=789KiB/s (808kB/s)(7916KiB/10028msec) 00:24:47.633 slat (usec): min=3, max=8025, avg=25.80, stdev=247.58 00:24:47.633 clat (msec): min=33, max=144, avg=80.89, stdev=22.32 00:24:47.633 lat (msec): min=33, max=144, avg=80.92, stdev=22.32 00:24:47.633 clat percentiles (msec): 00:24:47.633 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:24:47.633 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:24:47.633 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 118], 00:24:47.633 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:24:47.633 | 99.99th=[ 144] 00:24:47.633 bw ( KiB/s): min= 496, max= 1048, per=4.33%, avg=787.50, stdev=144.48, samples=20 00:24:47.633 iops : min= 124, max= 262, avg=196.85, stdev=36.14, samples=20 00:24:47.633 lat (msec) : 50=10.11%, 100=63.57%, 250=26.33% 00:24:47.633 cpu : usr=40.28%, sys=2.52%, ctx=1312, majf=0, minf=9 00:24:47.633 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:47.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.633 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:47.633 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.633 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:47.633 00:24:47.633 Run status group 0 (all jobs): 00:24:47.633 READ: bw=17.8MiB/s (18.6MB/s), 692KiB/s-817KiB/s (709kB/s-837kB/s), io=179MiB (187MB), run=10001-10055msec 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:47.633 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 bdev_null0 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 [2024-12-10 10:38:21.483335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:47.634 10:38:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 bdev_null1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:47.634 { 00:24:47.634 "params": { 00:24:47.634 "name": "Nvme$subsystem", 00:24:47.634 "trtype": "$TEST_TRANSPORT", 00:24:47.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.634 "adrfam": "ipv4", 00:24:47.634 "trsvcid": "$NVMF_PORT", 00:24:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.634 "hdgst": ${hdgst:-false}, 00:24:47.634 "ddgst": ${ddgst:-false} 00:24:47.634 }, 00:24:47.634 "method": "bdev_nvme_attach_controller" 00:24:47.634 } 00:24:47.634 EOF 00:24:47.634 )") 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:47.634 { 00:24:47.634 "params": { 00:24:47.634 "name": "Nvme$subsystem", 00:24:47.634 "trtype": "$TEST_TRANSPORT", 00:24:47.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.634 "adrfam": "ipv4", 00:24:47.634 "trsvcid": "$NVMF_PORT", 00:24:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.634 "hdgst": ${hdgst:-false}, 00:24:47.634 "ddgst": ${ddgst:-false} 00:24:47.634 }, 00:24:47.634 "method": "bdev_nvme_attach_controller" 00:24:47.634 } 00:24:47.634 EOF 00:24:47.634 )") 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:47.634 "params": { 00:24:47.634 "name": "Nvme0", 00:24:47.634 "trtype": "tcp", 00:24:47.634 "traddr": "10.0.0.3", 00:24:47.634 "adrfam": "ipv4", 00:24:47.634 "trsvcid": "4420", 00:24:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:47.634 "hdgst": false, 00:24:47.634 "ddgst": false 00:24:47.634 }, 00:24:47.634 "method": "bdev_nvme_attach_controller" 00:24:47.634 },{ 00:24:47.634 "params": { 00:24:47.634 "name": "Nvme1", 00:24:47.634 "trtype": "tcp", 00:24:47.634 "traddr": "10.0.0.3", 00:24:47.634 "adrfam": "ipv4", 00:24:47.634 "trsvcid": "4420", 00:24:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.634 "hdgst": false, 00:24:47.634 "ddgst": false 00:24:47.634 }, 00:24:47.634 "method": "bdev_nvme_attach_controller" 00:24:47.634 }' 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:47.634 10:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:47.634 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:47.634 ... 00:24:47.634 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:47.634 ... 
00:24:47.634 fio-3.35 00:24:47.634 Starting 4 threads 00:24:52.906 00:24:52.906 filename0: (groupid=0, jobs=1): err= 0: pid=98794: Tue Dec 10 10:38:27 2024 00:24:52.906 read: IOPS=2073, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5002msec) 00:24:52.906 slat (nsec): min=6624, max=61558, avg=14653.07, stdev=4863.80 00:24:52.906 clat (usec): min=888, max=6797, avg=3803.89, stdev=430.43 00:24:52.906 lat (usec): min=901, max=6811, avg=3818.55, stdev=430.68 00:24:52.906 clat percentiles (usec): 00:24:52.906 | 1.00th=[ 2040], 5.00th=[ 2835], 10.00th=[ 3556], 20.00th=[ 3752], 00:24:52.906 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:24:52.906 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4293], 00:24:52.906 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 4752], 99.95th=[ 4817], 00:24:52.906 | 99.99th=[ 5014] 00:24:52.906 bw ( KiB/s): min=15872, max=17920, per=23.20%, avg=16355.56, stdev=603.02, samples=9 00:24:52.906 iops : min= 1984, max= 2240, avg=2044.44, stdev=75.38, samples=9 00:24:52.906 lat (usec) : 1000=0.10% 00:24:52.906 lat (msec) : 2=0.66%, 4=77.70%, 10=21.54% 00:24:52.906 cpu : usr=90.78%, sys=8.40%, ctx=8, majf=0, minf=0 00:24:52.906 IO depths : 1=0.1%, 2=22.5%, 4=51.4%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 issued rwts: total=10370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.906 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:52.906 filename0: (groupid=0, jobs=1): err= 0: pid=98795: Tue Dec 10 10:38:27 2024 00:24:52.906 read: IOPS=2711, BW=21.2MiB/s (22.2MB/s)(106MiB/5003msec) 00:24:52.906 slat (nsec): min=4930, max=56180, avg=9955.55, stdev=3977.43 00:24:52.906 clat (usec): min=577, max=12253, avg=2924.35, stdev=1036.72 00:24:52.906 lat (usec): min=585, max=12270, avg=2934.30, stdev=1036.71 00:24:52.906 clat percentiles (usec): 00:24:52.906 | 1.00th=[ 1221], 5.00th=[ 1270], 10.00th=[ 1303], 20.00th=[ 1385], 00:24:52.906 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 3261], 60.00th=[ 3589], 00:24:52.906 | 70.00th=[ 3687], 80.00th=[ 3785], 90.00th=[ 3916], 95.00th=[ 4080], 00:24:52.906 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4621], 99.95th=[11994], 00:24:52.906 | 99.99th=[12125] 00:24:52.906 bw ( KiB/s): min=20896, max=22608, per=31.29%, avg=22057.33, stdev=510.64, samples=9 00:24:52.906 iops : min= 2612, max= 2826, avg=2757.11, stdev=63.87, samples=9 00:24:52.906 lat (usec) : 750=0.18%, 1000=0.18% 00:24:52.906 lat (msec) : 2=23.83%, 4=68.64%, 10=7.11%, 20=0.06% 00:24:52.906 cpu : usr=89.74%, sys=9.22%, ctx=6, majf=0, minf=0 00:24:52.906 IO depths : 1=0.1%, 2=1.5%, 4=62.9%, 8=35.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 issued rwts: total=13567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.906 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:52.906 filename1: (groupid=0, jobs=1): err= 0: pid=98796: Tue Dec 10 10:38:27 2024 00:24:52.906 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5002msec) 00:24:52.906 slat (nsec): min=6970, max=53716, avg=14868.04, stdev=4575.29 00:24:52.906 clat (usec): min=2082, max=5545, avg=3914.77, stdev=232.90 00:24:52.906 lat (usec): min=2094, max=5559, avg=3929.63, stdev=233.16 00:24:52.906 clat percentiles (usec): 00:24:52.906 | 1.00th=[ 
3458], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3785], 00:24:52.906 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3916], 00:24:52.906 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4359], 00:24:52.906 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5014], 99.95th=[ 5014], 00:24:52.906 | 99.99th=[ 5080] 00:24:52.906 bw ( KiB/s): min=15104, max=16384, per=22.76%, avg=16046.22, stdev=379.68, samples=9 00:24:52.906 iops : min= 1888, max= 2048, avg=2005.78, stdev=47.46, samples=9 00:24:52.906 lat (msec) : 4=74.40%, 10=25.60% 00:24:52.906 cpu : usr=90.56%, sys=8.68%, ctx=3, majf=0, minf=9 00:24:52.906 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 issued rwts: total=10072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.906 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:52.906 filename1: (groupid=0, jobs=1): err= 0: pid=98797: Tue Dec 10 10:38:27 2024 00:24:52.906 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5002msec) 00:24:52.906 slat (nsec): min=6748, max=53833, avg=15041.53, stdev=4688.21 00:24:52.906 clat (usec): min=2083, max=5552, avg=3913.95, stdev=233.36 00:24:52.906 lat (usec): min=2097, max=5564, avg=3928.99, stdev=233.67 00:24:52.906 clat percentiles (usec): 00:24:52.906 | 1.00th=[ 3458], 5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3785], 00:24:52.906 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3851], 60.00th=[ 3916], 00:24:52.906 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4359], 00:24:52.906 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5014], 99.95th=[ 5014], 00:24:52.906 | 99.99th=[ 5080] 00:24:52.906 bw ( KiB/s): min=15104, max=16384, per=22.76%, avg=16042.67, stdev=378.63, samples=9 00:24:52.906 iops : min= 1888, max= 2048, avg=2005.33, stdev=47.33, samples=9 00:24:52.906 lat (msec) : 4=74.35%, 10=25.65% 00:24:52.906 cpu : usr=89.64%, sys=9.62%, ctx=8, majf=0, minf=9 00:24:52.906 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:52.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.906 issued rwts: total=10072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.906 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:52.906 00:24:52.906 Run status group 0 (all jobs): 00:24:52.906 READ: bw=68.8MiB/s (72.2MB/s), 15.7MiB/s-21.2MiB/s (16.5MB/s-22.2MB/s), io=344MiB (361MB), run=5002-5003msec 00:24:52.906 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 ************************************ 00:24:52.907 END TEST fio_dif_rand_params 00:24:52.907 ************************************ 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 00:24:52.907 real 0m23.072s 00:24:52.907 user 2m2.653s 00:24:52.907 sys 0m9.406s 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 10:38:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:52.907 10:38:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:52.907 10:38:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 ************************************ 00:24:52.907 START TEST fio_dif_digest 00:24:52.907 ************************************ 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 bdev_null0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:52.907 [2024-12-10 10:38:27.487027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.907 { 00:24:52.907 "params": { 00:24:52.907 "name": "Nvme$subsystem", 00:24:52.907 "trtype": "$TEST_TRANSPORT", 00:24:52.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.907 "adrfam": 
"ipv4", 00:24:52.907 "trsvcid": "$NVMF_PORT", 00:24:52.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.907 "hdgst": ${hdgst:-false}, 00:24:52.907 "ddgst": ${ddgst:-false} 00:24:52.907 }, 00:24:52.907 "method": "bdev_nvme_attach_controller" 00:24:52.907 } 00:24:52.907 EOF 00:24:52.907 )") 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:52.907 "params": { 00:24:52.907 "name": "Nvme0", 00:24:52.907 "trtype": "tcp", 00:24:52.907 "traddr": "10.0.0.3", 00:24:52.907 "adrfam": "ipv4", 00:24:52.907 "trsvcid": "4420", 00:24:52.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:52.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:52.907 "hdgst": true, 00:24:52.907 "ddgst": true 00:24:52.907 }, 00:24:52.907 "method": "bdev_nvme_attach_controller" 00:24:52.907 }' 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:52.907 10:38:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:52.907 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:52.907 ... 
00:24:52.907 fio-3.35 00:24:52.907 Starting 3 threads 00:25:05.112 00:25:05.112 filename0: (groupid=0, jobs=1): err= 0: pid=98903: Tue Dec 10 10:38:38 2024 00:25:05.112 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10006msec) 00:25:05.112 slat (nsec): min=6824, max=35785, avg=9390.05, stdev=3455.44 00:25:05.112 clat (usec): min=10964, max=14083, avg=11832.56, stdev=461.57 00:25:05.112 lat (usec): min=10971, max=14096, avg=11841.95, stdev=461.81 00:25:05.112 clat percentiles (usec): 00:25:05.112 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:25:05.112 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:25:05.112 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12911], 00:25:05.112 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14091], 99.95th=[14091], 00:25:05.112 | 99.99th=[14091] 00:25:05.112 bw ( KiB/s): min=31488, max=33024, per=33.36%, avg=32417.68, stdev=547.80, samples=19 00:25:05.112 iops : min= 246, max= 258, avg=253.26, stdev= 4.28, samples=19 00:25:05.112 lat (msec) : 20=100.00% 00:25:05.112 cpu : usr=90.58%, sys=8.90%, ctx=16, majf=0, minf=9 00:25:05.112 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:05.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.112 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:05.112 filename0: (groupid=0, jobs=1): err= 0: pid=98904: Tue Dec 10 10:38:38 2024 00:25:05.112 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10005msec) 00:25:05.112 slat (nsec): min=6774, max=52971, avg=9605.99, stdev=3979.32 00:25:05.112 clat (usec): min=8319, max=13987, avg=11830.03, stdev=480.96 00:25:05.112 lat (usec): min=8326, max=14000, avg=11839.63, stdev=481.36 00:25:05.112 clat percentiles (usec): 00:25:05.112 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:25:05.112 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:25:05.112 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:25:05.112 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13960], 99.95th=[13960], 00:25:05.112 | 99.99th=[13960] 00:25:05.112 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32377.26, stdev=462.44, samples=19 00:25:05.112 iops : min= 246, max= 258, avg=252.95, stdev= 3.61, samples=19 00:25:05.112 lat (msec) : 10=0.12%, 20=99.88% 00:25:05.112 cpu : usr=91.08%, sys=8.37%, ctx=26, majf=0, minf=0 00:25:05.112 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:05.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.112 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:05.112 filename0: (groupid=0, jobs=1): err= 0: pid=98905: Tue Dec 10 10:38:38 2024 00:25:05.112 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10005msec) 00:25:05.112 slat (nsec): min=6768, max=80197, avg=9487.36, stdev=3952.52 00:25:05.112 clat (usec): min=7010, max=15088, avg=11830.42, stdev=500.19 00:25:05.112 lat (usec): min=7017, max=15122, avg=11839.91, stdev=500.56 00:25:05.112 clat percentiles (usec): 00:25:05.112 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:25:05.112 | 30.00th=[11600], 40.00th=[11600], 
50.00th=[11600], 60.00th=[11731], 00:25:05.112 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:25:05.112 | 99.00th=[13566], 99.50th=[13698], 99.90th=[15008], 99.95th=[15008], 00:25:05.112 | 99.99th=[15139] 00:25:05.112 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32377.26, stdev=462.44, samples=19 00:25:05.112 iops : min= 246, max= 258, avg=252.95, stdev= 3.61, samples=19 00:25:05.112 lat (msec) : 10=0.12%, 20=99.88% 00:25:05.112 cpu : usr=91.52%, sys=7.93%, ctx=10, majf=0, minf=0 00:25:05.112 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:05.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.112 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:05.112 00:25:05.112 Run status group 0 (all jobs): 00:25:05.112 READ: bw=94.9MiB/s (99.5MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=950MiB (996MB), run=10005-10006msec 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.112 00:25:05.112 real 0m10.859s 00:25:05.112 user 0m27.905s 00:25:05.112 sys 0m2.740s 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.112 10:38:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:05.112 ************************************ 00:25:05.112 END TEST fio_dif_digest 00:25:05.112 ************************************ 00:25:05.112 10:38:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:05.112 10:38:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.112 rmmod nvme_tcp 00:25:05.112 rmmod nvme_fabrics 00:25:05.112 rmmod nvme_keyring 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.112 10:38:38 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 98155 ']' 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 98155 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 98155 ']' 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 98155 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98155 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:05.112 killing process with pid 98155 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98155' 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@969 -- # kill 98155 00:25:05.112 10:38:38 nvmf_dif -- common/autotest_common.sh@974 -- # wait 98155 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:25:05.112 10:38:38 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:05.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:05.112 Waiting for block devices as requested 00:25:05.112 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:05.112 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:05.112 10:38:39 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:05.113 10:38:39 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.113 10:38:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:05.113 10:38:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.113 10:38:39 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:25:05.113 00:25:05.113 real 0m59.171s 00:25:05.113 user 3m45.547s 00:25:05.113 sys 0m20.581s 00:25:05.113 10:38:39 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.113 10:38:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:05.113 ************************************ 00:25:05.113 END TEST nvmf_dif 00:25:05.113 ************************************ 00:25:05.113 10:38:39 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:05.113 10:38:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:05.113 10:38:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.113 10:38:39 -- common/autotest_common.sh@10 -- # set +x 00:25:05.113 ************************************ 00:25:05.113 START TEST nvmf_abort_qd_sizes 00:25:05.113 ************************************ 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:05.113 * Looking for test storage... 00:25:05.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.113 --rc genhtml_branch_coverage=1 00:25:05.113 --rc genhtml_function_coverage=1 00:25:05.113 --rc genhtml_legend=1 00:25:05.113 --rc geninfo_all_blocks=1 00:25:05.113 --rc geninfo_unexecuted_blocks=1 00:25:05.113 00:25:05.113 ' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.113 --rc genhtml_branch_coverage=1 00:25:05.113 --rc genhtml_function_coverage=1 00:25:05.113 --rc genhtml_legend=1 00:25:05.113 --rc geninfo_all_blocks=1 00:25:05.113 --rc geninfo_unexecuted_blocks=1 00:25:05.113 00:25:05.113 ' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.113 --rc genhtml_branch_coverage=1 00:25:05.113 --rc genhtml_function_coverage=1 00:25:05.113 --rc genhtml_legend=1 00:25:05.113 --rc geninfo_all_blocks=1 00:25:05.113 --rc geninfo_unexecuted_blocks=1 00:25:05.113 00:25:05.113 ' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.113 --rc genhtml_branch_coverage=1 00:25:05.113 --rc genhtml_function_coverage=1 00:25:05.113 --rc genhtml_legend=1 00:25:05.113 --rc geninfo_all_blocks=1 00:25:05.113 --rc geninfo_unexecuted_blocks=1 00:25:05.113 00:25:05.113 ' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.113 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:05.113 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:05.114 Cannot find device "nvmf_init_br" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:05.114 Cannot find device "nvmf_init_br2" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:05.114 Cannot find device "nvmf_tgt_br" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:05.114 Cannot find device "nvmf_tgt_br2" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:05.114 Cannot find device "nvmf_init_br" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:05.114 Cannot find device "nvmf_init_br2" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:05.114 Cannot find device "nvmf_tgt_br" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:05.114 Cannot find device "nvmf_tgt_br2" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:05.114 Cannot find device "nvmf_br" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:05.114 Cannot find device "nvmf_init_if" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:05.114 Cannot find device "nvmf_init_if2" 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:05.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
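Note on the failed deletes just above: they are harmless pre-cleanup, since the namespace does not exist yet. The topology that the trace below then builds (two initiator veth pairs, two target veth pairs inside nvmf_tgt_ns_spdk, all bridged through nvmf_br, with 4420/tcp allowed in) condenses to roughly the following for a single initiator/target pair, using the names and addresses from this run:

# Condensed sketch of nvmf_veth_init (one pair shown; the helper creates two of each).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # host-side initiator can reach the namespaced target address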
00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:05.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:05.114 10:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:05.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:05.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:25:05.114 00:25:05.114 --- 10.0.0.3 ping statistics --- 00:25:05.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.114 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:05.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:05.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:25:05.114 00:25:05.114 --- 10.0.0.4 ping statistics --- 00:25:05.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.114 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:05.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:05.114 00:25:05.114 --- 10.0.0.1 ping statistics --- 00:25:05.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.114 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:05.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:05.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:25:05.114 00:25:05.114 --- 10.0.0.2 ping statistics --- 00:25:05.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.114 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:25:05.114 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:05.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:05.682 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:05.682 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=99544 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 99544 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 99544 ']' 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.944 10:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:05.944 [2024-12-10 10:38:41.029600] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
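Note on the startup traced above: nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the ip netns exec command) and waitforlisten then blocks until the RPC socket answers. By hand, and assuming the default /var/tmp/spdk.sock, that boils down to something like:

# Sketch of nvmfappstart + waitforlisten (the polling loop is an assumption, not the exact helper).
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
  sleep 0.5    # wait for the target's RPC socket to come up
done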
00:25:05.944 [2024-12-10 10:38:41.029696] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.205 [2024-12-10 10:38:41.171692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.205 [2024-12-10 10:38:41.218595] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.205 [2024-12-10 10:38:41.218902] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.205 [2024-12-10 10:38:41.219082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.205 [2024-12-10 10:38:41.219233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.205 [2024-12-10 10:38:41.219255] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.205 [2024-12-10 10:38:41.219431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.205 [2024-12-10 10:38:41.219561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.205 [2024-12-10 10:38:41.220141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.205 [2024-12-10 10:38:41.220182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.205 [2024-12-10 10:38:41.257641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:06.205 10:38:41 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:06.205 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
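The enumeration just above finds NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express). Reassembled from the four piped commands in the trace, the probe is:

  # print the BDF of every NVMe controller in the system
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each resulting BDF is then checked against /sys/bus/pci/drivers/nvme before being added to the list, which is how this run ends up with 0000:00:10.0 and 0000:00:11.0.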
00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.206 10:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:06.206 ************************************ 00:25:06.206 START TEST spdk_target_abort 00:25:06.206 ************************************ 00:25:06.206 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:25:06.206 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:06.206 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:06.206 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.206 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:06.464 spdk_targetn1 00:25:06.464 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:06.465 [2024-12-10 10:38:41.472078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:06.465 [2024-12-10 10:38:41.504373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:06.465 10:38:41 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:06.465 10:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:09.751 Initializing NVMe Controllers 00:25:09.751 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:09.751 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:09.751 Initialization complete. Launching workers. 
00:25:09.751 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9802, failed: 0 00:25:09.751 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1094, failed to submit 8708 00:25:09.751 success 915, unsuccessful 179, failed 0 00:25:09.751 10:38:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:09.751 10:38:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:13.039 Initializing NVMe Controllers 00:25:13.039 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:13.039 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:13.039 Initialization complete. Launching workers. 00:25:13.039 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8896, failed: 0 00:25:13.039 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1173, failed to submit 7723 00:25:13.039 success 381, unsuccessful 792, failed 0 00:25:13.039 10:38:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:13.039 10:38:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:16.327 Initializing NVMe Controllers 00:25:16.327 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:16.327 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:16.327 Initialization complete. Launching workers. 
00:25:16.327 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31802, failed: 0 00:25:16.327 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2329, failed to submit 29473 00:25:16.327 success 474, unsuccessful 1855, failed 0 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.327 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:16.585 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.585 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99544 00:25:16.585 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 99544 ']' 00:25:16.585 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 99544 00:25:16.585 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99544 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.586 killing process with pid 99544 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99544' 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 99544 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 99544 00:25:16.586 00:25:16.586 real 0m10.382s 00:25:16.586 user 0m39.854s 00:25:16.586 sys 0m2.002s 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:16.586 ************************************ 00:25:16.586 END TEST spdk_target_abort 00:25:16.586 10:38:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:16.586 ************************************ 00:25:16.844 10:38:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:16.844 10:38:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:16.844 10:38:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:16.844 10:38:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:16.844 ************************************ 00:25:16.844 START TEST kernel_target_abort 00:25:16.844 
************************************ 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:16.844 10:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:17.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:17.103 Waiting for block devices as requested 00:25:17.103 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.362 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:17.362 No valid GPT data, bailing 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:17.362 No valid GPT data, bailing 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
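This part of the kernel-target setup walks /sys/block/nvme*, skips zoned namespaces, and treats a namespace as free when spdk-gpt.py and blkid report no partition table; the last free namespace becomes the backing device for the nvmet subsystem. A condensed, approximate rendering of that loop (the helper functions are inlined and simplified here, so treat it as a sketch rather than the script itself):

  nvme=
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # skip zoned namespaces
      [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
      # a namespace counts as in use when blkid reports a partition-table type
      pt=$(blkid -s PTTYPE -o value "/dev/$dev")
      [[ -z $pt ]] && nvme=/dev/$dev
  done
  echo "backing device: $nvme"   # /dev/nvme1n1 in this run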
00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:17.362 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:17.621 No valid GPT data, bailing 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:17.621 No valid GPT data, bailing 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:17.621 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a --hostid=495b1d55-bad1-4013-8ca4-4675b1022b7a -a 10.0.0.1 -t tcp -s 4420 00:25:17.622 00:25:17.622 Discovery Log Number of Records 2, Generation counter 2 00:25:17.622 =====Discovery Log Entry 0====== 00:25:17.622 trtype: tcp 00:25:17.622 adrfam: ipv4 00:25:17.622 subtype: current discovery subsystem 00:25:17.622 treq: not specified, sq flow control disable supported 00:25:17.622 portid: 1 00:25:17.622 trsvcid: 4420 00:25:17.622 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:17.622 traddr: 10.0.0.1 00:25:17.622 eflags: none 00:25:17.622 sectype: none 00:25:17.622 =====Discovery Log Entry 1====== 00:25:17.622 trtype: tcp 00:25:17.622 adrfam: ipv4 00:25:17.622 subtype: nvme subsystem 00:25:17.622 treq: not specified, sq flow control disable supported 00:25:17.622 portid: 1 00:25:17.622 trsvcid: 4420 00:25:17.622 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:17.622 traddr: 10.0.0.1 00:25:17.622 eflags: none 00:25:17.622 sectype: none 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:17.622 10:38:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:17.622 10:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:20.963 Initializing NVMe Controllers 00:25:20.963 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:20.963 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:20.963 Initialization complete. Launching workers. 00:25:20.963 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33372, failed: 0 00:25:20.963 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33372, failed to submit 0 00:25:20.963 success 0, unsuccessful 33372, failed 0 00:25:20.963 10:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:20.963 10:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:24.251 Initializing NVMe Controllers 00:25:24.251 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:24.251 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:24.251 Initialization complete. Launching workers. 
00:25:24.251 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63747, failed: 0 00:25:24.251 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25576, failed to submit 38171 00:25:24.251 success 0, unsuccessful 25576, failed 0 00:25:24.251 10:38:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:24.251 10:38:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:27.539 Initializing NVMe Controllers 00:25:27.539 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:27.539 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:27.539 Initialization complete. Launching workers. 00:25:27.539 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68340, failed: 0 00:25:27.539 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17052, failed to submit 51288 00:25:27.539 success 0, unsuccessful 17052, failed 0 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:25:27.539 10:39:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:27.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:28.734 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:28.734 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:28.734 00:25:28.734 real 0m11.913s 00:25:28.734 user 0m5.665s 00:25:28.734 sys 0m3.588s 00:25:28.734 10:39:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:28.734 ************************************ 00:25:28.734 END TEST kernel_target_abort 00:25:28.734 ************************************ 00:25:28.734 10:39:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:28.734 
10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.734 rmmod nvme_tcp 00:25:28.734 rmmod nvme_fabrics 00:25:28.734 rmmod nvme_keyring 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 99544 ']' 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 99544 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 99544 ']' 00:25:28.734 10:39:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 99544 00:25:28.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99544) - No such process 00:25:28.735 Process with pid 99544 is not found 00:25:28.735 10:39:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 99544 is not found' 00:25:28.735 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:25:28.735 10:39:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:29.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:29.302 Waiting for block devices as requested 00:25:29.302 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:29.302 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:29.562 10:39:04 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:29.562 00:25:29.562 real 0m25.324s 00:25:29.562 user 0m46.675s 00:25:29.562 sys 0m7.065s 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:29.562 ************************************ 00:25:29.562 END TEST nvmf_abort_qd_sizes 00:25:29.562 ************************************ 00:25:29.562 10:39:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:29.821 10:39:04 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:29.821 10:39:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:29.821 10:39:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:29.821 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:25:29.821 ************************************ 00:25:29.821 START TEST keyring_file 00:25:29.821 ************************************ 00:25:29.821 10:39:04 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:29.821 * Looking for test storage... 
00:25:29.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:29.821 10:39:04 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:29.822 10:39:04 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:25:29.822 10:39:04 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:29.822 10:39:04 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:29.822 10:39:04 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:29.822 10:39:05 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.822 10:39:05 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.822 --rc genhtml_branch_coverage=1 00:25:29.822 --rc genhtml_function_coverage=1 00:25:29.822 --rc genhtml_legend=1 00:25:29.822 --rc geninfo_all_blocks=1 00:25:29.822 --rc geninfo_unexecuted_blocks=1 00:25:29.822 00:25:29.822 ' 00:25:29.822 10:39:05 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.822 --rc genhtml_branch_coverage=1 00:25:29.822 --rc genhtml_function_coverage=1 00:25:29.822 --rc genhtml_legend=1 00:25:29.822 --rc geninfo_all_blocks=1 00:25:29.822 --rc 
geninfo_unexecuted_blocks=1 00:25:29.822 00:25:29.822 ' 00:25:29.822 10:39:05 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.822 --rc genhtml_branch_coverage=1 00:25:29.822 --rc genhtml_function_coverage=1 00:25:29.822 --rc genhtml_legend=1 00:25:29.822 --rc geninfo_all_blocks=1 00:25:29.822 --rc geninfo_unexecuted_blocks=1 00:25:29.822 00:25:29.822 ' 00:25:29.822 10:39:05 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.822 --rc genhtml_branch_coverage=1 00:25:29.822 --rc genhtml_function_coverage=1 00:25:29.822 --rc genhtml_legend=1 00:25:29.822 --rc geninfo_all_blocks=1 00:25:29.822 --rc geninfo_unexecuted_blocks=1 00:25:29.822 00:25:29.822 ' 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:29.822 10:39:05 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.822 10:39:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.822 10:39:05 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 10:39:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 10:39:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 10:39:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:29.822 10:39:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.822 10:39:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.822 10:39:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:29.822 10:39:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:29.822 10:39:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:29.822 10:39:05 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:29.822 10:39:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:29.822 10:39:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:29.822 10:39:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.URHSLiknz2 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.URHSLiknz2 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.URHSLiknz2 00:25:30.081 10:39:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.URHSLiknz2 00:25:30.081 10:39:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aje5L1BMmO 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:30.081 10:39:05 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aje5L1BMmO 00:25:30.081 10:39:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aje5L1BMmO 00:25:30.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
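prep_key above reduces to three steps: derive an interchange-format PSK from the raw hex key, write it to a mktemp file, and make the file private. A schematic re-creation of those steps (the NVMeTLSkey-1 payload is produced by an inline python helper in the real script; the string written below is a stand-in, not a valid encoding):

  key_hex=00112233445566778899aabbccddeeff   # key0 material from the trace
  path=$(mktemp)                             # /tmp/tmp.URHSLiknz2 in this run
  # the real helper wraps key_hex into an NVMeTLSkey-1 interchange string
  printf 'NVMeTLSkey-1:<encoding of %s>:\n' "$key_hex" > "$path"
  chmod 0600 "$path"                         # key files stay readable by the owner only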
00:25:30.081 10:39:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.aje5L1BMmO 00:25:30.081 10:39:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=100444 00:25:30.081 10:39:05 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.081 10:39:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100444 00:25:30.081 10:39:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100444 ']' 00:25:30.081 10:39:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.081 10:39:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.081 10:39:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.081 10:39:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.081 10:39:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:30.081 [2024-12-10 10:39:05.229908] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:30.081 [2024-12-10 10:39:05.230210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100444 ] 00:25:30.340 [2024-12-10 10:39:05.371738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.340 [2024-12-10 10:39:05.415291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.340 [2024-12-10 10:39:05.460161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:30.599 10:39:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:30.599 [2024-12-10 10:39:05.603072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.599 null0 00:25:30.599 [2024-12-10 10:39:05.635049] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.599 [2024-12-10 10:39:05.635386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.599 10:39:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:30.599 [2024-12-10 10:39:05.667040] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:30.599 request: 00:25:30.599 { 00:25:30.599 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.599 "secure_channel": false, 00:25:30.599 "listen_address": { 00:25:30.599 "trtype": "tcp", 00:25:30.599 "traddr": "127.0.0.1", 00:25:30.599 "trsvcid": "4420" 00:25:30.599 }, 00:25:30.599 "method": "nvmf_subsystem_add_listener", 00:25:30.599 "req_id": 1 00:25:30.599 } 00:25:30.599 Got JSON-RPC error response 00:25:30.599 response: 00:25:30.599 { 00:25:30.599 "code": -32602, 00:25:30.599 "message": "Invalid parameters" 00:25:30.599 } 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:30.599 10:39:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=100458 00:25:30.599 10:39:05 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:30.599 10:39:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 100458 /var/tmp/bperf.sock 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100458 ']' 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:30.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.599 10:39:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:30.599 [2024-12-10 10:39:05.736566] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:30.599 [2024-12-10 10:39:05.737259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100458 ] 00:25:30.858 [2024-12-10 10:39:05.872020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.858 [2024-12-10 10:39:05.905187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.858 [2024-12-10 10:39:05.932493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:30.858 10:39:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.858 10:39:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:30.858 10:39:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:30.858 10:39:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:31.117 10:39:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.aje5L1BMmO 00:25:31.117 10:39:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.aje5L1BMmO 00:25:31.375 10:39:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:31.376 10:39:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:31.376 10:39:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:31.376 10:39:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:31.376 10:39:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:31.634 10:39:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.URHSLiknz2 == \/\t\m\p\/\t\m\p\.\U\R\H\S\L\i\k\n\z\2 ]] 00:25:31.634 10:39:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:31.634 10:39:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:31.634 10:39:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:31.634 10:39:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:31.634 10:39:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:31.893 10:39:07 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.aje5L1BMmO == \/\t\m\p\/\t\m\p\.\a\j\e\5\L\1\B\M\m\O ]] 00:25:31.893 10:39:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:31.893 10:39:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:31.893 10:39:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:31.893 10:39:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:31.893 10:39:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:31.893 10:39:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:32.152 10:39:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:32.152 10:39:07 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:32.152 10:39:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:32.152 10:39:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:32.411 10:39:07 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:32.411 10:39:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:32.411 10:39:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:32.411 10:39:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:32.411 10:39:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:32.411 10:39:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:32.669 [2024-12-10 10:39:07.853732] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:32.928 nvme0n1 00:25:32.928 10:39:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:32.928 10:39:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:32.928 10:39:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:32.928 10:39:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:32.928 10:39:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:32.928 10:39:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:33.187 10:39:08 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:33.187 10:39:08 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:33.187 10:39:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:33.187 10:39:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:33.187 10:39:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:33.187 10:39:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:33.187 10:39:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:33.446 10:39:08 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:33.446 10:39:08 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:33.446 Running I/O for 1 seconds... 
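[editorial sketch] Every bperf_cmd/get_refcnt call traced above follows one pattern: issue a JSON-RPC request to the bdevperf socket and filter the reply with jq. A minimal sketch of that pattern follows, reusing the rpc.py path and socket seen in the log; get_refcnt_sketch is an illustrative name, not the test's own helper.

# Sketch: ask the bdevperf app for its registered keys and extract one key's
# reference count by name.
bperf_sock=/var/tmp/bperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_refcnt_sketch() {
  local name=$1
  "$rpc" -s "$bperf_sock" keyring_get_keys |
    jq -r ".[] | select(.name == \"$name\") | .refcnt"
}

# After bdev_nvme_attach_controller --psk key0, the test expects key0's refcnt
# to rise to 2; the extra reference appears to come from the attached
# controller holding the key, while key1 stays at 1.
[[ $(get_refcnt_sketch key0) -eq 2 ]] && echo "key0 is in use by the controller"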
00:25:34.382 13995.00 IOPS, 54.67 MiB/s 00:25:34.382 Latency(us) 00:25:34.382 [2024-12-10T10:39:09.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.382 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:34.382 nvme0n1 : 1.01 14041.73 54.85 0.00 0.00 9094.41 4110.89 21090.68 00:25:34.382 [2024-12-10T10:39:09.609Z] =================================================================================================================== 00:25:34.382 [2024-12-10T10:39:09.609Z] Total : 14041.73 54.85 0.00 0.00 9094.41 4110.89 21090.68 00:25:34.382 { 00:25:34.382 "results": [ 00:25:34.382 { 00:25:34.382 "job": "nvme0n1", 00:25:34.382 "core_mask": "0x2", 00:25:34.382 "workload": "randrw", 00:25:34.382 "percentage": 50, 00:25:34.382 "status": "finished", 00:25:34.382 "queue_depth": 128, 00:25:34.382 "io_size": 4096, 00:25:34.382 "runtime": 1.005859, 00:25:34.382 "iops": 14041.729506819544, 00:25:34.382 "mibps": 54.85050588601384, 00:25:34.382 "io_failed": 0, 00:25:34.382 "io_timeout": 0, 00:25:34.382 "avg_latency_us": 9094.409721685846, 00:25:34.382 "min_latency_us": 4110.894545454546, 00:25:34.382 "max_latency_us": 21090.676363636365 00:25:34.382 } 00:25:34.382 ], 00:25:34.382 "core_count": 1 00:25:34.382 } 00:25:34.382 10:39:09 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:34.382 10:39:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:34.948 10:39:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:34.948 10:39:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:34.948 10:39:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:34.948 10:39:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.948 10:39:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.948 10:39:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:34.948 10:39:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:34.948 10:39:10 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:34.949 10:39:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:34.949 10:39:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:34.949 10:39:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.949 10:39:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.949 10:39:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:35.207 10:39:10 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:35.207 10:39:10 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:35.207 10:39:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:35.207 10:39:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:35.207 10:39:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:35.207 10:39:10 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.207 10:39:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:35.207 10:39:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.207 10:39:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:35.207 10:39:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:35.466 [2024-12-10 10:39:10.589327] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:35.466 [2024-12-10 10:39:10.589347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620b20 (107): Transport endpoint is not connected 00:25:35.466 [2024-12-10 10:39:10.590338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620b20 (9): Bad file descriptor 00:25:35.466 [2024-12-10 10:39:10.591335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.466 [2024-12-10 10:39:10.591358] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:35.466 [2024-12-10 10:39:10.591369] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:35.466 [2024-12-10 10:39:10.591379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:35.466 request: 00:25:35.466 { 00:25:35.466 "name": "nvme0", 00:25:35.466 "trtype": "tcp", 00:25:35.466 "traddr": "127.0.0.1", 00:25:35.466 "adrfam": "ipv4", 00:25:35.466 "trsvcid": "4420", 00:25:35.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:35.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:35.466 "prchk_reftag": false, 00:25:35.466 "prchk_guard": false, 00:25:35.466 "hdgst": false, 00:25:35.466 "ddgst": false, 00:25:35.466 "psk": "key1", 00:25:35.466 "allow_unrecognized_csi": false, 00:25:35.466 "method": "bdev_nvme_attach_controller", 00:25:35.466 "req_id": 1 00:25:35.466 } 00:25:35.466 Got JSON-RPC error response 00:25:35.466 response: 00:25:35.466 { 00:25:35.466 "code": -5, 00:25:35.466 "message": "Input/output error" 00:25:35.466 } 00:25:35.466 10:39:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:35.466 10:39:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.466 10:39:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.466 10:39:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.466 10:39:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:35.466 10:39:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:35.466 10:39:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:35.466 10:39:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:35.466 10:39:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.466 10:39:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:35.724 10:39:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:35.725 10:39:10 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:35.725 10:39:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:35.725 10:39:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:35.725 10:39:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:35.725 10:39:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:35.725 10:39:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.986 10:39:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:35.986 10:39:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:35.986 10:39:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:36.246 10:39:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:36.246 10:39:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:36.505 10:39:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:36.505 10:39:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:36.505 10:39:11 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:36.764 10:39:11 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:36.764 10:39:11 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.URHSLiknz2 00:25:36.764 10:39:11 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:36.764 10:39:11 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:25:36.764 10:39:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:36.764 10:39:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:36.764 10:39:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.764 10:39:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:36.764 10:39:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.764 10:39:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:36.764 10:39:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:37.023 [2024-12-10 10:39:12.108786] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.URHSLiknz2': 0100660 00:25:37.023 [2024-12-10 10:39:12.108824] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:37.023 request: 00:25:37.023 { 00:25:37.023 "name": "key0", 00:25:37.023 "path": "/tmp/tmp.URHSLiknz2", 00:25:37.023 "method": "keyring_file_add_key", 00:25:37.023 "req_id": 1 00:25:37.023 } 00:25:37.023 Got JSON-RPC error response 00:25:37.023 response: 00:25:37.023 { 00:25:37.023 "code": -1, 00:25:37.023 "message": "Operation not permitted" 00:25:37.023 } 00:25:37.023 10:39:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:37.023 10:39:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:37.023 10:39:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:37.023 10:39:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:37.023 10:39:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.URHSLiknz2 00:25:37.023 10:39:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:37.023 10:39:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.URHSLiknz2 00:25:37.282 10:39:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.URHSLiknz2 00:25:37.282 10:39:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:37.282 10:39:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:37.282 10:39:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:37.282 10:39:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:37.282 10:39:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:37.282 10:39:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:37.541 10:39:12 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:37.541 10:39:12 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:37.541 10:39:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:37.541 10:39:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:37.541 10:39:12 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:37.541 10:39:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.541 10:39:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:37.541 10:39:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.541 10:39:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:37.541 10:39:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:37.800 [2024-12-10 10:39:12.841026] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.URHSLiknz2': No such file or directory 00:25:37.800 [2024-12-10 10:39:12.841060] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:37.800 [2024-12-10 10:39:12.841095] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:37.800 [2024-12-10 10:39:12.841104] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:37.800 [2024-12-10 10:39:12.841112] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:37.800 [2024-12-10 10:39:12.841119] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:37.800 request: 00:25:37.800 { 00:25:37.800 "name": "nvme0", 00:25:37.800 "trtype": "tcp", 00:25:37.800 "traddr": "127.0.0.1", 00:25:37.800 "adrfam": "ipv4", 00:25:37.801 "trsvcid": "4420", 00:25:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.801 "prchk_reftag": false, 00:25:37.801 "prchk_guard": false, 00:25:37.801 "hdgst": false, 00:25:37.801 "ddgst": false, 00:25:37.801 "psk": "key0", 00:25:37.801 "allow_unrecognized_csi": false, 00:25:37.801 "method": "bdev_nvme_attach_controller", 00:25:37.801 "req_id": 1 00:25:37.801 } 00:25:37.801 Got JSON-RPC error response 00:25:37.801 response: 00:25:37.801 { 00:25:37.801 "code": -19, 00:25:37.801 "message": "No such device" 00:25:37.801 } 00:25:37.801 10:39:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:37.801 10:39:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:37.801 10:39:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:37.801 10:39:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:37.801 10:39:12 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:37.801 10:39:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:38.059 10:39:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:38.059 10:39:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:38.059 10:39:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:38.059 10:39:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:38.059 
10:39:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:38.059 10:39:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:38.059 10:39:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fGPEiwDAn4 00:25:38.059 10:39:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:38.060 10:39:13 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:38.060 10:39:13 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:38.060 10:39:13 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:38.060 10:39:13 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:38.060 10:39:13 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:38.060 10:39:13 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:38.060 10:39:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fGPEiwDAn4 00:25:38.060 10:39:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fGPEiwDAn4 00:25:38.060 10:39:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.fGPEiwDAn4 00:25:38.060 10:39:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fGPEiwDAn4 00:25:38.060 10:39:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fGPEiwDAn4 00:25:38.318 10:39:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:38.318 10:39:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:38.577 nvme0n1 00:25:38.577 10:39:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:38.577 10:39:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:38.577 10:39:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:38.577 10:39:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:38.577 10:39:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:38.577 10:39:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:38.836 10:39:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:38.836 10:39:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:38.836 10:39:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:39.095 10:39:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:39.095 10:39:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:39.095 10:39:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:39.095 10:39:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:39.095 10:39:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:39.354 10:39:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:39.354 10:39:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:39.354 10:39:14 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:39.354 10:39:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:39.354 10:39:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:39.354 10:39:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:39.354 10:39:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:39.613 10:39:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:39.613 10:39:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:39.613 10:39:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:39.872 10:39:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:39.872 10:39:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:39.872 10:39:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:40.147 10:39:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:40.147 10:39:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fGPEiwDAn4 00:25:40.147 10:39:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fGPEiwDAn4 00:25:40.415 10:39:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.aje5L1BMmO 00:25:40.415 10:39:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.aje5L1BMmO 00:25:40.674 10:39:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:40.674 10:39:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:40.932 nvme0n1 00:25:40.932 10:39:16 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:40.932 10:39:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:41.192 10:39:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:41.192 "subsystems": [ 00:25:41.192 { 00:25:41.192 "subsystem": "keyring", 00:25:41.192 "config": [ 00:25:41.192 { 00:25:41.192 "method": "keyring_file_add_key", 00:25:41.192 "params": { 00:25:41.192 "name": "key0", 00:25:41.192 "path": "/tmp/tmp.fGPEiwDAn4" 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "keyring_file_add_key", 00:25:41.192 "params": { 00:25:41.192 "name": "key1", 00:25:41.192 "path": "/tmp/tmp.aje5L1BMmO" 00:25:41.192 } 00:25:41.192 } 00:25:41.192 ] 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "subsystem": "iobuf", 00:25:41.192 "config": [ 00:25:41.192 { 00:25:41.192 "method": "iobuf_set_options", 00:25:41.192 "params": { 00:25:41.192 "small_pool_count": 8192, 00:25:41.192 "large_pool_count": 1024, 00:25:41.192 "small_bufsize": 8192, 00:25:41.192 "large_bufsize": 135168 00:25:41.192 } 00:25:41.192 } 00:25:41.192 ] 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "subsystem": "sock", 00:25:41.192 "config": [ 
00:25:41.192 { 00:25:41.192 "method": "sock_set_default_impl", 00:25:41.192 "params": { 00:25:41.192 "impl_name": "uring" 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "sock_impl_set_options", 00:25:41.192 "params": { 00:25:41.192 "impl_name": "ssl", 00:25:41.192 "recv_buf_size": 4096, 00:25:41.192 "send_buf_size": 4096, 00:25:41.192 "enable_recv_pipe": true, 00:25:41.192 "enable_quickack": false, 00:25:41.192 "enable_placement_id": 0, 00:25:41.192 "enable_zerocopy_send_server": true, 00:25:41.192 "enable_zerocopy_send_client": false, 00:25:41.192 "zerocopy_threshold": 0, 00:25:41.192 "tls_version": 0, 00:25:41.192 "enable_ktls": false 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "sock_impl_set_options", 00:25:41.192 "params": { 00:25:41.192 "impl_name": "posix", 00:25:41.192 "recv_buf_size": 2097152, 00:25:41.192 "send_buf_size": 2097152, 00:25:41.192 "enable_recv_pipe": true, 00:25:41.192 "enable_quickack": false, 00:25:41.192 "enable_placement_id": 0, 00:25:41.192 "enable_zerocopy_send_server": true, 00:25:41.192 "enable_zerocopy_send_client": false, 00:25:41.192 "zerocopy_threshold": 0, 00:25:41.192 "tls_version": 0, 00:25:41.192 "enable_ktls": false 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "sock_impl_set_options", 00:25:41.192 "params": { 00:25:41.192 "impl_name": "uring", 00:25:41.192 "recv_buf_size": 2097152, 00:25:41.192 "send_buf_size": 2097152, 00:25:41.192 "enable_recv_pipe": true, 00:25:41.192 "enable_quickack": false, 00:25:41.192 "enable_placement_id": 0, 00:25:41.192 "enable_zerocopy_send_server": false, 00:25:41.192 "enable_zerocopy_send_client": false, 00:25:41.192 "zerocopy_threshold": 0, 00:25:41.192 "tls_version": 0, 00:25:41.192 "enable_ktls": false 00:25:41.192 } 00:25:41.192 } 00:25:41.192 ] 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "subsystem": "vmd", 00:25:41.192 "config": [] 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "subsystem": "accel", 00:25:41.192 "config": [ 00:25:41.192 { 00:25:41.192 "method": "accel_set_options", 00:25:41.192 "params": { 00:25:41.192 "small_cache_size": 128, 00:25:41.192 "large_cache_size": 16, 00:25:41.192 "task_count": 2048, 00:25:41.192 "sequence_count": 2048, 00:25:41.192 "buf_count": 2048 00:25:41.192 } 00:25:41.192 } 00:25:41.192 ] 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "subsystem": "bdev", 00:25:41.192 "config": [ 00:25:41.192 { 00:25:41.192 "method": "bdev_set_options", 00:25:41.192 "params": { 00:25:41.192 "bdev_io_pool_size": 65535, 00:25:41.192 "bdev_io_cache_size": 256, 00:25:41.192 "bdev_auto_examine": true, 00:25:41.192 "iobuf_small_cache_size": 128, 00:25:41.192 "iobuf_large_cache_size": 16 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "bdev_raid_set_options", 00:25:41.192 "params": { 00:25:41.192 "process_window_size_kb": 1024, 00:25:41.192 "process_max_bandwidth_mb_sec": 0 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "bdev_iscsi_set_options", 00:25:41.192 "params": { 00:25:41.192 "timeout_sec": 30 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "bdev_nvme_set_options", 00:25:41.192 "params": { 00:25:41.192 "action_on_timeout": "none", 00:25:41.192 "timeout_us": 0, 00:25:41.192 "timeout_admin_us": 0, 00:25:41.192 "keep_alive_timeout_ms": 10000, 00:25:41.192 "arbitration_burst": 0, 00:25:41.192 "low_priority_weight": 0, 00:25:41.192 "medium_priority_weight": 0, 00:25:41.192 "high_priority_weight": 0, 00:25:41.192 "nvme_adminq_poll_period_us": 10000, 00:25:41.192 
"nvme_ioq_poll_period_us": 0, 00:25:41.192 "io_queue_requests": 512, 00:25:41.192 "delay_cmd_submit": true, 00:25:41.192 "transport_retry_count": 4, 00:25:41.192 "bdev_retry_count": 3, 00:25:41.192 "transport_ack_timeout": 0, 00:25:41.192 "ctrlr_loss_timeout_sec": 0, 00:25:41.192 "reconnect_delay_sec": 0, 00:25:41.192 "fast_io_fail_timeout_sec": 0, 00:25:41.192 "disable_auto_failback": false, 00:25:41.192 "generate_uuids": false, 00:25:41.192 "transport_tos": 0, 00:25:41.192 "nvme_error_stat": false, 00:25:41.192 "rdma_srq_size": 0, 00:25:41.192 "io_path_stat": false, 00:25:41.192 "allow_accel_sequence": false, 00:25:41.192 "rdma_max_cq_size": 0, 00:25:41.192 "rdma_cm_event_timeout_ms": 0, 00:25:41.192 "dhchap_digests": [ 00:25:41.192 "sha256", 00:25:41.192 "sha384", 00:25:41.192 "sha512" 00:25:41.192 ], 00:25:41.192 "dhchap_dhgroups": [ 00:25:41.192 "null", 00:25:41.192 "ffdhe2048", 00:25:41.192 "ffdhe3072", 00:25:41.192 "ffdhe4096", 00:25:41.192 "ffdhe6144", 00:25:41.192 "ffdhe8192" 00:25:41.192 ] 00:25:41.192 } 00:25:41.192 }, 00:25:41.192 { 00:25:41.192 "method": "bdev_nvme_attach_controller", 00:25:41.192 "params": { 00:25:41.192 "name": "nvme0", 00:25:41.192 "trtype": "TCP", 00:25:41.192 "adrfam": "IPv4", 00:25:41.192 "traddr": "127.0.0.1", 00:25:41.192 "trsvcid": "4420", 00:25:41.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:41.192 "prchk_reftag": false, 00:25:41.193 "prchk_guard": false, 00:25:41.193 "ctrlr_loss_timeout_sec": 0, 00:25:41.193 "reconnect_delay_sec": 0, 00:25:41.193 "fast_io_fail_timeout_sec": 0, 00:25:41.193 "psk": "key0", 00:25:41.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:41.193 "hdgst": false, 00:25:41.193 "ddgst": false 00:25:41.193 } 00:25:41.193 }, 00:25:41.193 { 00:25:41.193 "method": "bdev_nvme_set_hotplug", 00:25:41.193 "params": { 00:25:41.193 "period_us": 100000, 00:25:41.193 "enable": false 00:25:41.193 } 00:25:41.193 }, 00:25:41.193 { 00:25:41.193 "method": "bdev_wait_for_examine" 00:25:41.193 } 00:25:41.193 ] 00:25:41.193 }, 00:25:41.193 { 00:25:41.193 "subsystem": "nbd", 00:25:41.193 "config": [] 00:25:41.193 } 00:25:41.193 ] 00:25:41.193 }' 00:25:41.193 10:39:16 keyring_file -- keyring/file.sh@115 -- # killprocess 100458 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100458 ']' 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100458 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100458 00:25:41.193 killing process with pid 100458 00:25:41.193 Received shutdown signal, test time was about 1.000000 seconds 00:25:41.193 00:25:41.193 Latency(us) 00:25:41.193 [2024-12-10T10:39:16.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.193 [2024-12-10T10:39:16.420Z] =================================================================================================================== 00:25:41.193 [2024-12-10T10:39:16.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100458' 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@969 -- # 
kill 100458 00:25:41.193 10:39:16 keyring_file -- common/autotest_common.sh@974 -- # wait 100458 00:25:41.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.452 10:39:16 keyring_file -- keyring/file.sh@118 -- # bperfpid=100691 00:25:41.452 10:39:16 keyring_file -- keyring/file.sh@120 -- # waitforlisten 100691 /var/tmp/bperf.sock 00:25:41.452 10:39:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100691 ']' 00:25:41.452 10:39:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.452 10:39:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:41.452 10:39:16 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:41.452 10:39:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:41.452 10:39:16 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:41.452 "subsystems": [ 00:25:41.452 { 00:25:41.452 "subsystem": "keyring", 00:25:41.452 "config": [ 00:25:41.452 { 00:25:41.452 "method": "keyring_file_add_key", 00:25:41.452 "params": { 00:25:41.452 "name": "key0", 00:25:41.452 "path": "/tmp/tmp.fGPEiwDAn4" 00:25:41.452 } 00:25:41.452 }, 00:25:41.452 { 00:25:41.452 "method": "keyring_file_add_key", 00:25:41.452 "params": { 00:25:41.452 "name": "key1", 00:25:41.452 "path": "/tmp/tmp.aje5L1BMmO" 00:25:41.452 } 00:25:41.452 } 00:25:41.452 ] 00:25:41.452 }, 00:25:41.452 { 00:25:41.452 "subsystem": "iobuf", 00:25:41.452 "config": [ 00:25:41.452 { 00:25:41.452 "method": "iobuf_set_options", 00:25:41.452 "params": { 00:25:41.452 "small_pool_count": 8192, 00:25:41.453 "large_pool_count": 1024, 00:25:41.453 "small_bufsize": 8192, 00:25:41.453 "large_bufsize": 135168 00:25:41.453 } 00:25:41.453 } 00:25:41.453 ] 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "subsystem": "sock", 00:25:41.453 "config": [ 00:25:41.453 { 00:25:41.453 "method": "sock_set_default_impl", 00:25:41.453 "params": { 00:25:41.453 "impl_name": "uring" 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "sock_impl_set_options", 00:25:41.453 "params": { 00:25:41.453 "impl_name": "ssl", 00:25:41.453 "recv_buf_size": 4096, 00:25:41.453 "send_buf_size": 4096, 00:25:41.453 "enable_recv_pipe": true, 00:25:41.453 "enable_quickack": false, 00:25:41.453 "enable_placement_id": 0, 00:25:41.453 "enable_zerocopy_send_server": true, 00:25:41.453 "enable_zerocopy_send_client": false, 00:25:41.453 "zerocopy_threshold": 0, 00:25:41.453 "tls_version": 0, 00:25:41.453 "enable_ktls": false 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "sock_impl_set_options", 00:25:41.453 "params": { 00:25:41.453 "impl_name": "posix", 00:25:41.453 "recv_buf_size": 2097152, 00:25:41.453 "send_buf_size": 2097152, 00:25:41.453 "enable_recv_pipe": true, 00:25:41.453 "enable_quickack": false, 00:25:41.453 "enable_placement_id": 0, 00:25:41.453 "enable_zerocopy_send_server": true, 00:25:41.453 "enable_zerocopy_send_client": false, 00:25:41.453 "zerocopy_threshold": 0, 00:25:41.453 "tls_version": 0, 00:25:41.453 "enable_ktls": false 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "sock_impl_set_options", 00:25:41.453 "params": { 00:25:41.453 "impl_name": "uring", 00:25:41.453 "recv_buf_size": 2097152, 00:25:41.453 "send_buf_size": 2097152, 00:25:41.453 "enable_recv_pipe": 
true, 00:25:41.453 "enable_quickack": false, 00:25:41.453 "enable_placement_id": 0, 00:25:41.453 "enable_zerocopy_send_server": false, 00:25:41.453 "enable_zerocopy_send_client": false, 00:25:41.453 "zerocopy_threshold": 0, 00:25:41.453 "tls_version": 0, 00:25:41.453 "enable_ktls": false 00:25:41.453 } 00:25:41.453 } 00:25:41.453 ] 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "subsystem": "vmd", 00:25:41.453 "config": [] 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "subsystem": "accel", 00:25:41.453 "config": [ 00:25:41.453 { 00:25:41.453 "method": "accel_set_options", 00:25:41.453 "params": { 00:25:41.453 "small_cache_size": 128, 00:25:41.453 "large_cache_size": 16, 00:25:41.453 "task_count": 2048, 00:25:41.453 "sequence_count": 2048, 00:25:41.453 "buf_count": 2048 00:25:41.453 } 00:25:41.453 } 00:25:41.453 ] 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "subsystem": "bdev", 00:25:41.453 "config": [ 00:25:41.453 { 00:25:41.453 "method": "bdev_set_options", 00:25:41.453 "params": { 00:25:41.453 "bdev_io_pool_size": 65535, 00:25:41.453 "bdev_io_cache_size": 256, 00:25:41.453 "bdev_auto_examine": true, 00:25:41.453 "iobuf_small_cache_size": 128, 00:25:41.453 "iobuf_large_cache_size": 16 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "bdev_raid_set_options", 00:25:41.453 "params": { 00:25:41.453 "process_window_size_kb": 1024, 00:25:41.453 "process_max_bandwidth_mb_sec": 0 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "bdev_iscsi_set_options", 00:25:41.453 "params": { 00:25:41.453 "timeout_sec": 30 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "bdev_nvme_set_options", 00:25:41.453 "params": { 00:25:41.453 "action_on_timeout": "none", 00:25:41.453 "timeout_us": 0, 00:25:41.453 "timeout_admin_us": 0, 00:25:41.453 "keep_alive_timeout_ms": 10000, 00:25:41.453 "arbitration_burst": 0, 00:25:41.453 "low_priority_weight": 0, 00:25:41.453 "medium_priority_weight": 0, 00:25:41.453 "high_priority_weight": 0, 00:25:41.453 "nvme_adminq_poll_period_us": 10000, 00:25:41.453 "nvme_ioq_poll_period_us": 0, 00:25:41.453 10:39:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:41.453 10:39:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:41.453 "io_queue_requests": 512, 00:25:41.453 "delay_cmd_submit": true, 00:25:41.453 "transport_retry_count": 4, 00:25:41.453 "bdev_retry_count": 3, 00:25:41.453 "transport_ack_timeout": 0, 00:25:41.453 "ctrlr_loss_timeout_sec": 0, 00:25:41.453 "reconnect_delay_sec": 0, 00:25:41.453 "fast_io_fail_timeout_sec": 0, 00:25:41.453 "disable_auto_failback": false, 00:25:41.453 "generate_uuids": false, 00:25:41.453 "transport_tos": 0, 00:25:41.453 "nvme_error_stat": false, 00:25:41.453 "rdma_srq_size": 0, 00:25:41.453 "io_path_stat": false, 00:25:41.453 "allow_accel_sequence": false, 00:25:41.453 "rdma_max_cq_size": 0, 00:25:41.453 "rdma_cm_event_timeout_ms": 0, 00:25:41.453 "dhchap_digests": [ 00:25:41.453 "sha256", 00:25:41.453 "sha384", 00:25:41.453 "sha512" 00:25:41.453 ], 00:25:41.453 "dhchap_dhgroups": [ 00:25:41.453 "null", 00:25:41.453 "ffdhe2048", 00:25:41.453 "ffdhe3072", 00:25:41.453 "ffdhe4096", 00:25:41.453 "ffdhe6144", 00:25:41.453 "ffdhe8192" 00:25:41.453 ] 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "bdev_nvme_attach_controller", 00:25:41.453 "params": { 00:25:41.453 "name": "nvme0", 00:25:41.453 "trtype": "TCP", 00:25:41.453 "adrfam": "IPv4", 00:25:41.453 "traddr": "127.0.0.1", 00:25:41.453 "trsvcid": "4420", 00:25:41.453 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:25:41.453 "prchk_reftag": false, 00:25:41.453 "prchk_guard": false, 00:25:41.453 "ctrlr_loss_timeout_sec": 0, 00:25:41.453 "reconnect_delay_sec": 0, 00:25:41.453 "fast_io_fail_timeout_sec": 0, 00:25:41.453 "psk": "key0", 00:25:41.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:41.453 "hdgst": false, 00:25:41.453 "ddgst": false 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "bdev_nvme_set_hotplug", 00:25:41.453 "params": { 00:25:41.453 "period_us": 100000, 00:25:41.453 "enable": false 00:25:41.453 } 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "method": "bdev_wait_for_examine" 00:25:41.453 } 00:25:41.453 ] 00:25:41.453 }, 00:25:41.453 { 00:25:41.453 "subsystem": "nbd", 00:25:41.453 "config": [] 00:25:41.453 } 00:25:41.453 ] 00:25:41.453 }' 00:25:41.453 [2024-12-10 10:39:16.575105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:41.453 [2024-12-10 10:39:16.575196] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100691 ] 00:25:41.713 [2024-12-10 10:39:16.702604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.713 [2024-12-10 10:39:16.734220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.713 [2024-12-10 10:39:16.841541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:41.713 [2024-12-10 10:39:16.876964] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:42.650 10:39:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.650 10:39:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:42.650 10:39:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:42.650 10:39:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:42.650 10:39:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:42.650 10:39:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:42.650 10:39:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:42.650 10:39:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:42.650 10:39:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:42.650 10:39:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:42.650 10:39:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:42.650 10:39:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:42.909 10:39:18 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:42.909 10:39:18 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:42.909 10:39:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:42.909 10:39:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:42.909 10:39:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:42.909 10:39:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:42.909 10:39:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:43.168 10:39:18 keyring_file -- keyring/file.sh@123 -- 
# (( 1 == 1 )) 00:25:43.168 10:39:18 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:43.168 10:39:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:43.168 10:39:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:43.427 10:39:18 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:43.427 10:39:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:43.427 10:39:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fGPEiwDAn4 /tmp/tmp.aje5L1BMmO 00:25:43.427 10:39:18 keyring_file -- keyring/file.sh@20 -- # killprocess 100691 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100691 ']' 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100691 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100691 00:25:43.427 killing process with pid 100691 00:25:43.427 Received shutdown signal, test time was about 1.000000 seconds 00:25:43.427 00:25:43.427 Latency(us) 00:25:43.427 [2024-12-10T10:39:18.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.427 [2024-12-10T10:39:18.654Z] =================================================================================================================== 00:25:43.427 [2024-12-10T10:39:18.654Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100691' 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@969 -- # kill 100691 00:25:43.427 10:39:18 keyring_file -- common/autotest_common.sh@974 -- # wait 100691 00:25:43.686 10:39:18 keyring_file -- keyring/file.sh@21 -- # killprocess 100444 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100444 ']' 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100444 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100444 00:25:43.686 killing process with pid 100444 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:43.686 10:39:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100444' 00:25:43.687 10:39:18 keyring_file -- common/autotest_common.sh@969 -- # kill 100444 00:25:43.687 10:39:18 keyring_file -- common/autotest_common.sh@974 -- # wait 100444 00:25:43.945 ************************************ 00:25:43.945 END TEST keyring_file 00:25:43.945 ************************************ 00:25:43.945 00:25:43.945 real 0m14.188s 00:25:43.945 user 0m36.832s 00:25:43.945 sys 0m2.615s 00:25:43.945 10:39:19 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.945 10:39:19 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:43.945 10:39:19 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:43.945 10:39:19 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:43.945 10:39:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:43.945 10:39:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:43.945 10:39:19 -- common/autotest_common.sh@10 -- # set +x 00:25:43.945 ************************************ 00:25:43.945 START TEST keyring_linux 00:25:43.945 ************************************ 00:25:43.945 10:39:19 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:43.945 Joined session keyring: 668472005 00:25:43.945 * Looking for test storage... 00:25:43.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:43.945 10:39:19 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:43.945 10:39:19 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:43.945 10:39:19 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:44.205 10:39:19 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.205 10:39:19 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:44.205 10:39:19 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.205 10:39:19 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.205 --rc genhtml_branch_coverage=1 00:25:44.205 --rc genhtml_function_coverage=1 00:25:44.205 --rc genhtml_legend=1 00:25:44.205 --rc geninfo_all_blocks=1 00:25:44.205 --rc geninfo_unexecuted_blocks=1 00:25:44.205 00:25:44.205 ' 00:25:44.205 10:39:19 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.205 --rc genhtml_branch_coverage=1 00:25:44.205 --rc genhtml_function_coverage=1 00:25:44.205 --rc genhtml_legend=1 00:25:44.206 --rc geninfo_all_blocks=1 00:25:44.206 --rc geninfo_unexecuted_blocks=1 00:25:44.206 00:25:44.206 ' 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.206 --rc genhtml_branch_coverage=1 00:25:44.206 --rc genhtml_function_coverage=1 00:25:44.206 --rc genhtml_legend=1 00:25:44.206 --rc geninfo_all_blocks=1 00:25:44.206 --rc geninfo_unexecuted_blocks=1 00:25:44.206 00:25:44.206 ' 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.206 --rc genhtml_branch_coverage=1 00:25:44.206 --rc genhtml_function_coverage=1 00:25:44.206 --rc genhtml_legend=1 00:25:44.206 --rc geninfo_all_blocks=1 00:25:44.206 --rc geninfo_unexecuted_blocks=1 00:25:44.206 00:25:44.206 ' 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.206 10:39:19 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:495b1d55-bad1-4013-8ca4-4675b1022b7a 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=495b1d55-bad1-4013-8ca4-4675b1022b7a 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:44.206 10:39:19 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.206 10:39:19 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.206 10:39:19 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.206 10:39:19 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.206 10:39:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.206 10:39:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.206 10:39:19 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.206 10:39:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:44.206 10:39:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.206 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:44.206 /tmp/:spdk-test:key0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:44.206 10:39:19 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:44.206 /tmp/:spdk-test:key1 00:25:44.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.206 10:39:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100813 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.206 10:39:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100813 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100813 ']' 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:44.206 10:39:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:44.206 [2024-12-10 10:39:19.425609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:44.206 [2024-12-10 10:39:19.425940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100813 ] 00:25:44.466 [2024-12-10 10:39:19.564118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.466 [2024-12-10 10:39:19.598950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.466 [2024-12-10 10:39:19.633316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:44.725 10:39:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:44.725 [2024-12-10 10:39:19.755571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.725 null0 00:25:44.725 [2024-12-10 10:39:19.787543] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:44.725 [2024-12-10 10:39:19.787921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.725 10:39:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:44.725 1059638193 00:25:44.725 10:39:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:44.725 175912126 00:25:44.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:44.725 10:39:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100823 00:25:44.725 10:39:19 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:44.725 10:39:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100823 /var/tmp/bperf.sock 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100823 ']' 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:44.725 10:39:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:44.725 [2024-12-10 10:39:19.872562] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:44.725 [2024-12-10 10:39:19.872860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100823 ] 00:25:44.984 [2024-12-10 10:39:20.008375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.984 [2024-12-10 10:39:20.042941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.920 10:39:20 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:45.920 10:39:20 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:45.920 10:39:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:45.920 10:39:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:45.920 10:39:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:45.920 10:39:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:46.179 [2024-12-10 10:39:21.280798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:46.179 10:39:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:46.179 10:39:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:46.438 [2024-12-10 10:39:21.519518] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.438 nvme0n1 00:25:46.438 10:39:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:46.438 10:39:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:46.438 10:39:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:46.439 10:39:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:46.439 10:39:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.439 10:39:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:46.697 10:39:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:46.697 10:39:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:46.697 10:39:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:46.697 10:39:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:46.697 10:39:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.697 10:39:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:46.697 10:39:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@25 -- # sn=1059638193 00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 1059638193 == \1\0\5\9\6\3\8\1\9\3 ]] 00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1059638193 00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:46.956 10:39:22 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.215 Running I/O for 1 seconds... 00:25:48.152 15720.00 IOPS, 61.41 MiB/s 00:25:48.152 Latency(us) 00:25:48.152 [2024-12-10T10:39:23.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.152 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:48.152 nvme0n1 : 1.01 15720.66 61.41 0.00 0.00 8105.90 6791.91 15073.28 00:25:48.152 [2024-12-10T10:39:23.379Z] =================================================================================================================== 00:25:48.152 [2024-12-10T10:39:23.379Z] Total : 15720.66 61.41 0.00 0.00 8105.90 6791.91 15073.28 00:25:48.152 { 00:25:48.152 "results": [ 00:25:48.152 { 00:25:48.152 "job": "nvme0n1", 00:25:48.152 "core_mask": "0x2", 00:25:48.152 "workload": "randread", 00:25:48.152 "status": "finished", 00:25:48.152 "queue_depth": 128, 00:25:48.152 "io_size": 4096, 00:25:48.152 "runtime": 1.008164, 00:25:48.152 "iops": 15720.656559845422, 00:25:48.152 "mibps": 61.40881468689618, 00:25:48.152 "io_failed": 0, 00:25:48.152 "io_timeout": 0, 00:25:48.152 "avg_latency_us": 8105.898456684965, 00:25:48.152 "min_latency_us": 6791.912727272727, 00:25:48.152 "max_latency_us": 15073.28 00:25:48.152 } 00:25:48.152 ], 00:25:48.152 "core_count": 1 00:25:48.152 } 00:25:48.152 10:39:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:48.152 10:39:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:48.411 10:39:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:48.411 10:39:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:48.411 10:39:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:48.411 10:39:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:48.411 10:39:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:48.411 10:39:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:48.670 10:39:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:48.670 10:39:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:48.670 10:39:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:48.670 10:39:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:48.670 10:39:23 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:48.670 10:39:23 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:48.671 
10:39:23 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:48.671 10:39:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.671 10:39:23 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:48.671 10:39:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.671 10:39:23 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:48.671 10:39:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:48.930 [2024-12-10 10:39:24.120585] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:48.930 [2024-12-10 10:39:24.121330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e87a0 (107): Transport endpoint is not connected 00:25:48.930 [2024-12-10 10:39:24.122321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e87a0 (9): Bad file descriptor 00:25:48.930 [2024-12-10 10:39:24.123318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:48.930 [2024-12-10 10:39:24.123338] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:48.930 [2024-12-10 10:39:24.123360] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:48.930 [2024-12-10 10:39:24.123369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
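(Note) This second attach is the negative half of the test: :spdk-test:key1 holds a PSK the target was not configured to accept for this host, so the TLS connection is torn down and bdev_nvme_attach_controller fails — the request/response pair dumped next shows the resulting JSON-RPC error (-5, Input/output error). A hedged sketch of the same check outside the harness, assuming the socket and key names from this run:

  # Expected to fail: only :spdk-test:key0 matches the PSK the target trusts.
  if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
      echo "unexpected: attach with the wrong PSK succeeded" >&2
      exit 1
  fi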
00:25:48.930 request: 00:25:48.930 { 00:25:48.930 "name": "nvme0", 00:25:48.930 "trtype": "tcp", 00:25:48.930 "traddr": "127.0.0.1", 00:25:48.930 "adrfam": "ipv4", 00:25:48.930 "trsvcid": "4420", 00:25:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.930 "prchk_reftag": false, 00:25:48.930 "prchk_guard": false, 00:25:48.930 "hdgst": false, 00:25:48.930 "ddgst": false, 00:25:48.930 "psk": ":spdk-test:key1", 00:25:48.930 "allow_unrecognized_csi": false, 00:25:48.930 "method": "bdev_nvme_attach_controller", 00:25:48.930 "req_id": 1 00:25:48.930 } 00:25:48.930 Got JSON-RPC error response 00:25:48.930 response: 00:25:48.930 { 00:25:48.930 "code": -5, 00:25:48.930 "message": "Input/output error" 00:25:48.930 } 00:25:48.930 10:39:24 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:48.930 10:39:24 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:48.930 10:39:24 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:48.930 10:39:24 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@33 -- # sn=1059638193 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1059638193 00:25:48.930 1 links removed 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:48.930 10:39:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:49.189 10:39:24 keyring_linux -- keyring/linux.sh@33 -- # sn=175912126 00:25:49.189 10:39:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 175912126 00:25:49.189 1 links removed 00:25:49.189 10:39:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100823 00:25:49.189 10:39:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100823 ']' 00:25:49.189 10:39:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100823 00:25:49.189 10:39:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:49.189 10:39:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.189 10:39:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100823 00:25:49.189 killing process with pid 100823 00:25:49.189 Received shutdown signal, test time was about 1.000000 seconds 00:25:49.189 00:25:49.189 Latency(us) 00:25:49.189 [2024-12-10T10:39:24.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.189 [2024-12-10T10:39:24.416Z] =================================================================================================================== 00:25:49.190 [2024-12-10T10:39:24.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.190 10:39:24 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100823' 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 100823 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 100823 00:25:49.190 10:39:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100813 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100813 ']' 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100813 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100813 00:25:49.190 killing process with pid 100813 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100813' 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 100813 00:25:49.190 10:39:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 100813 00:25:49.449 00:25:49.449 real 0m5.495s 00:25:49.449 user 0m11.337s 00:25:49.449 sys 0m1.380s 00:25:49.449 ************************************ 00:25:49.449 END TEST keyring_linux 00:25:49.449 ************************************ 00:25:49.449 10:39:24 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:49.449 10:39:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:49.449 10:39:24 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:49.449 10:39:24 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:49.449 10:39:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:49.449 10:39:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:49.449 10:39:24 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:49.449 10:39:24 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:49.449 10:39:24 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:49.449 10:39:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.449 10:39:24 -- common/autotest_common.sh@10 -- # set +x 00:25:49.449 10:39:24 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:49.449 10:39:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:49.449 10:39:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:49.449 10:39:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.355 INFO: APP EXITING 00:25:51.355 INFO: 
killing all VMs 00:25:51.355 INFO: killing vhost app 00:25:51.355 INFO: EXIT DONE 00:25:51.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.922 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:51.922 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:52.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:52.858 Cleaning 00:25:52.858 Removing: /var/run/dpdk/spdk0/config 00:25:52.858 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:52.858 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:52.858 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:52.858 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:52.858 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:52.858 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:52.858 Removing: /var/run/dpdk/spdk1/config 00:25:52.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:52.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:52.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:52.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:52.858 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:52.858 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:52.858 Removing: /var/run/dpdk/spdk2/config 00:25:52.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:52.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:52.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:52.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:52.858 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:52.858 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:52.858 Removing: /var/run/dpdk/spdk3/config 00:25:52.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:52.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:52.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:52.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:52.858 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:52.858 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:52.858 Removing: /var/run/dpdk/spdk4/config 00:25:52.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:52.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:52.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:52.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:52.858 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:52.858 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:52.858 Removing: /dev/shm/nvmf_trace.0 00:25:52.858 Removing: /dev/shm/spdk_tgt_trace.pid69873 00:25:52.858 Removing: /var/run/dpdk/spdk0 00:25:52.858 Removing: /var/run/dpdk/spdk1 00:25:52.858 Removing: /var/run/dpdk/spdk2 00:25:52.858 Removing: /var/run/dpdk/spdk3 00:25:52.858 Removing: /var/run/dpdk/spdk4 00:25:52.858 Removing: /var/run/dpdk/spdk_pid100444 00:25:52.858 Removing: /var/run/dpdk/spdk_pid100458 00:25:52.858 Removing: /var/run/dpdk/spdk_pid100691 00:25:52.858 Removing: /var/run/dpdk/spdk_pid100813 00:25:52.858 Removing: /var/run/dpdk/spdk_pid100823 00:25:52.858 Removing: /var/run/dpdk/spdk_pid69731 00:25:52.858 Removing: /var/run/dpdk/spdk_pid69873 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70072 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70158 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70173 00:25:52.858 Removing: 
/var/run/dpdk/spdk_pid70282 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70293 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70427 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70622 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70776 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70849 00:25:52.858 Removing: /var/run/dpdk/spdk_pid70925 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71019 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71091 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71129 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71165 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71229 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71326 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71769 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71823 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71862 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71865 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71932 00:25:52.858 Removing: /var/run/dpdk/spdk_pid71935 00:25:52.858 Removing: /var/run/dpdk/spdk_pid72002 00:25:52.858 Removing: /var/run/dpdk/spdk_pid72005 00:25:52.858 Removing: /var/run/dpdk/spdk_pid72056 00:25:52.858 Removing: /var/run/dpdk/spdk_pid72061 00:25:52.858 Removing: /var/run/dpdk/spdk_pid72107 00:25:52.858 Removing: /var/run/dpdk/spdk_pid72125 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72255 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72285 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72368 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72694 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72706 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72743 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72756 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72772 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72790 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72799 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72814 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72833 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72847 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72862 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72881 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72895 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72910 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72929 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72943 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72953 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72972 00:25:53.117 Removing: /var/run/dpdk/spdk_pid72990 00:25:53.117 Removing: /var/run/dpdk/spdk_pid73001 00:25:53.117 Removing: /var/run/dpdk/spdk_pid73037 00:25:53.117 Removing: /var/run/dpdk/spdk_pid73045 00:25:53.117 Removing: /var/run/dpdk/spdk_pid73077 00:25:53.117 Removing: /var/run/dpdk/spdk_pid73141 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73175 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73179 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73207 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73217 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73219 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73267 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73275 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73309 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73313 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73321 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73332 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73336 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73345 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73355 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73359 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73393 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73414 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73429 
00:25:53.118 Removing: /var/run/dpdk/spdk_pid73452 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73456 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73469 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73504 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73521 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73542 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73555 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73557 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73559 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73572 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73574 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73583 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73589 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73670 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73708 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73822 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73850 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73895 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73910 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73926 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73946 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73977 00:25:53.118 Removing: /var/run/dpdk/spdk_pid73993 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74070 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74082 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74126 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74191 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74247 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74272 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74370 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74407 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74445 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74670 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74758 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74792 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74816 00:25:53.118 Removing: /var/run/dpdk/spdk_pid74849 00:25:53.377 Removing: /var/run/dpdk/spdk_pid74883 00:25:53.377 Removing: /var/run/dpdk/spdk_pid74915 00:25:53.377 Removing: /var/run/dpdk/spdk_pid74948 00:25:53.377 Removing: /var/run/dpdk/spdk_pid75334 00:25:53.377 Removing: /var/run/dpdk/spdk_pid75374 00:25:53.377 Removing: /var/run/dpdk/spdk_pid75709 00:25:53.377 Removing: /var/run/dpdk/spdk_pid76170 00:25:53.377 Removing: /var/run/dpdk/spdk_pid76432 00:25:53.377 Removing: /var/run/dpdk/spdk_pid77260 00:25:53.377 Removing: /var/run/dpdk/spdk_pid78168 00:25:53.377 Removing: /var/run/dpdk/spdk_pid78291 00:25:53.377 Removing: /var/run/dpdk/spdk_pid78353 00:25:53.377 Removing: /var/run/dpdk/spdk_pid79748 00:25:53.377 Removing: /var/run/dpdk/spdk_pid80057 00:25:53.377 Removing: /var/run/dpdk/spdk_pid83783 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84141 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84250 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84377 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84398 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84419 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84440 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84525 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84653 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84789 00:25:53.377 Removing: /var/run/dpdk/spdk_pid84865 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85069 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85152 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85226 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85576 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85988 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85989 00:25:53.377 Removing: /var/run/dpdk/spdk_pid85990 00:25:53.377 Removing: 
/var/run/dpdk/spdk_pid86241 00:25:53.377 Removing: /var/run/dpdk/spdk_pid86485 00:25:53.377 Removing: /var/run/dpdk/spdk_pid86493 00:25:53.377 Removing: /var/run/dpdk/spdk_pid88865 00:25:53.377 Removing: /var/run/dpdk/spdk_pid88867 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89194 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89208 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89228 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89257 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89263 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89352 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89355 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89463 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89465 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89573 00:25:53.377 Removing: /var/run/dpdk/spdk_pid89581 00:25:53.377 Removing: /var/run/dpdk/spdk_pid90025 00:25:53.377 Removing: /var/run/dpdk/spdk_pid90068 00:25:53.377 Removing: /var/run/dpdk/spdk_pid90171 00:25:53.377 Removing: /var/run/dpdk/spdk_pid90256 00:25:53.377 Removing: /var/run/dpdk/spdk_pid90603 00:25:53.377 Removing: /var/run/dpdk/spdk_pid90792 00:25:53.377 Removing: /var/run/dpdk/spdk_pid91206 00:25:53.377 Removing: /var/run/dpdk/spdk_pid91744 00:25:53.377 Removing: /var/run/dpdk/spdk_pid92595 00:25:53.377 Removing: /var/run/dpdk/spdk_pid93240 00:25:53.377 Removing: /var/run/dpdk/spdk_pid93242 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95259 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95303 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95355 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95403 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95519 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95572 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95614 00:25:53.377 Removing: /var/run/dpdk/spdk_pid95668 00:25:53.377 Removing: /var/run/dpdk/spdk_pid96023 00:25:53.377 Removing: /var/run/dpdk/spdk_pid97234 00:25:53.377 Removing: /var/run/dpdk/spdk_pid97367 00:25:53.377 Removing: /var/run/dpdk/spdk_pid97611 00:25:53.377 Removing: /var/run/dpdk/spdk_pid98212 00:25:53.377 Removing: /var/run/dpdk/spdk_pid98371 00:25:53.377 Removing: /var/run/dpdk/spdk_pid98528 00:25:53.377 Removing: /var/run/dpdk/spdk_pid98621 00:25:53.377 Removing: /var/run/dpdk/spdk_pid98789 00:25:53.377 Removing: /var/run/dpdk/spdk_pid98892 00:25:53.377 Removing: /var/run/dpdk/spdk_pid99593 00:25:53.377 Removing: /var/run/dpdk/spdk_pid99623 00:25:53.377 Removing: /var/run/dpdk/spdk_pid99658 00:25:53.377 Removing: /var/run/dpdk/spdk_pid99912 00:25:53.377 Removing: /var/run/dpdk/spdk_pid99943 00:25:53.377 Removing: /var/run/dpdk/spdk_pid99977 00:25:53.377 Clean 00:25:53.636 10:39:28 -- common/autotest_common.sh@1451 -- # return 0 00:25:53.636 10:39:28 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:53.636 10:39:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.636 10:39:28 -- common/autotest_common.sh@10 -- # set +x 00:25:53.636 10:39:28 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:53.636 10:39:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.636 10:39:28 -- common/autotest_common.sh@10 -- # set +x 00:25:53.636 10:39:28 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:53.636 10:39:28 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:53.636 10:39:28 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:53.636 10:39:28 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:53.636 10:39:28 -- spdk/autotest.sh@394 -- # hostname 
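(Note) With the workspace cleaned, autotest moves on to coverage post-processing: the lcov invocations below capture the per-test counters (tagged with the hostname), merge them with the baseline cov_base.info, and strip DPDK, system, and example paths from the combined cov_total.info. If a browsable report is wanted from that file afterwards, genhtml is the usual companion step — shown here purely as an illustration, it is not part of this job:

  # Hypothetical follow-up, run from the output directory holding cov_total.info.
  genhtml cov_total.info --output-directory coverage_html
  # then open coverage_html/index.html in a browser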
00:25:53.636 10:39:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:53.895 geninfo: WARNING: invalid characters removed from testname! 00:26:20.442 10:39:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:20.442 10:39:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:22.348 10:39:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:24.913 10:39:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:27.446 10:40:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:29.978 10:40:04 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:32.512 10:40:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:32.512 10:40:07 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:26:32.512 10:40:07 -- common/autotest_common.sh@1681 -- $ lcov --version 00:26:32.512 10:40:07 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:26:32.512 10:40:07 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:26:32.512 10:40:07 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:26:32.512 10:40:07 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:26:32.512 10:40:07 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:26:32.512 10:40:07 -- 
scripts/common.sh@336 -- $ IFS=.-: 00:26:32.512 10:40:07 -- scripts/common.sh@336 -- $ read -ra ver1 00:26:32.512 10:40:07 -- scripts/common.sh@337 -- $ IFS=.-: 00:26:32.512 10:40:07 -- scripts/common.sh@337 -- $ read -ra ver2 00:26:32.512 10:40:07 -- scripts/common.sh@338 -- $ local 'op=<' 00:26:32.512 10:40:07 -- scripts/common.sh@340 -- $ ver1_l=2 00:26:32.512 10:40:07 -- scripts/common.sh@341 -- $ ver2_l=1 00:26:32.512 10:40:07 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:26:32.512 10:40:07 -- scripts/common.sh@344 -- $ case "$op" in 00:26:32.512 10:40:07 -- scripts/common.sh@345 -- $ : 1 00:26:32.512 10:40:07 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:26:32.512 10:40:07 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.512 10:40:07 -- scripts/common.sh@365 -- $ decimal 1 00:26:32.512 10:40:07 -- scripts/common.sh@353 -- $ local d=1 00:26:32.512 10:40:07 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:26:32.512 10:40:07 -- scripts/common.sh@355 -- $ echo 1 00:26:32.512 10:40:07 -- scripts/common.sh@365 -- $ ver1[v]=1 00:26:32.512 10:40:07 -- scripts/common.sh@366 -- $ decimal 2 00:26:32.512 10:40:07 -- scripts/common.sh@353 -- $ local d=2 00:26:32.512 10:40:07 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:26:32.512 10:40:07 -- scripts/common.sh@355 -- $ echo 2 00:26:32.512 10:40:07 -- scripts/common.sh@366 -- $ ver2[v]=2 00:26:32.512 10:40:07 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:26:32.512 10:40:07 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:26:32.512 10:40:07 -- scripts/common.sh@368 -- $ return 0 00:26:32.512 10:40:07 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.512 10:40:07 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:26:32.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.512 --rc genhtml_branch_coverage=1 00:26:32.512 --rc genhtml_function_coverage=1 00:26:32.512 --rc genhtml_legend=1 00:26:32.512 --rc geninfo_all_blocks=1 00:26:32.512 --rc geninfo_unexecuted_blocks=1 00:26:32.512 00:26:32.512 ' 00:26:32.512 10:40:07 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:26:32.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.512 --rc genhtml_branch_coverage=1 00:26:32.512 --rc genhtml_function_coverage=1 00:26:32.512 --rc genhtml_legend=1 00:26:32.512 --rc geninfo_all_blocks=1 00:26:32.512 --rc geninfo_unexecuted_blocks=1 00:26:32.512 00:26:32.512 ' 00:26:32.512 10:40:07 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:26:32.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.512 --rc genhtml_branch_coverage=1 00:26:32.512 --rc genhtml_function_coverage=1 00:26:32.512 --rc genhtml_legend=1 00:26:32.512 --rc geninfo_all_blocks=1 00:26:32.512 --rc geninfo_unexecuted_blocks=1 00:26:32.512 00:26:32.512 ' 00:26:32.512 10:40:07 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:26:32.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.512 --rc genhtml_branch_coverage=1 00:26:32.512 --rc genhtml_function_coverage=1 00:26:32.512 --rc genhtml_legend=1 00:26:32.512 --rc geninfo_all_blocks=1 00:26:32.512 --rc geninfo_unexecuted_blocks=1 00:26:32.512 00:26:32.512 ' 00:26:32.512 10:40:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:32.512 10:40:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:32.512 10:40:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh 
]] 00:26:32.512 10:40:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.512 10:40:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.512 10:40:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.512 10:40:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.512 10:40:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.512 10:40:07 -- paths/export.sh@5 -- $ export PATH 00:26:32.512 10:40:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.512 10:40:07 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:32.512 10:40:07 -- common/autobuild_common.sh@479 -- $ date +%s 00:26:32.512 10:40:07 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733827207.XXXXXX 00:26:32.512 10:40:07 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733827207.IQAe9c 00:26:32.512 10:40:07 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:26:32.512 10:40:07 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:26:32.512 10:40:07 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:26:32.512 10:40:07 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:26:32.512 10:40:07 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:32.512 10:40:07 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:32.512 10:40:07 -- common/autobuild_common.sh@495 -- $ get_config_params 00:26:32.512 10:40:07 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:26:32.512 10:40:07 -- common/autotest_common.sh@10 -- $ set +x 00:26:32.512 10:40:07 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage 
--with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:26:32.512 10:40:07 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:26:32.512 10:40:07 -- pm/common@17 -- $ local monitor 00:26:32.512 10:40:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:32.512 10:40:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:32.512 10:40:07 -- pm/common@25 -- $ sleep 1 00:26:32.512 10:40:07 -- pm/common@21 -- $ date +%s 00:26:32.512 10:40:07 -- pm/common@21 -- $ date +%s 00:26:32.512 10:40:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733827207 00:26:32.513 10:40:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733827207 00:26:32.513 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733827207_collect-cpu-load.pm.log 00:26:32.513 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733827207_collect-vmstat.pm.log 00:26:33.450 10:40:08 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:26:33.450 10:40:08 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:26:33.450 10:40:08 -- spdk/autopackage.sh@14 -- $ timing_finish 00:26:33.450 10:40:08 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:33.450 10:40:08 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:33.450 10:40:08 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:33.450 10:40:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:33.450 10:40:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:33.450 10:40:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:33.450 10:40:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:33.450 10:40:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:33.450 10:40:08 -- pm/common@44 -- $ pid=102625 00:26:33.450 10:40:08 -- pm/common@50 -- $ kill -TERM 102625 00:26:33.450 10:40:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:33.450 10:40:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:33.450 10:40:08 -- pm/common@44 -- $ pid=102627 00:26:33.450 10:40:08 -- pm/common@50 -- $ kill -TERM 102627 00:26:33.450 + [[ -n 6009 ]] 00:26:33.450 + sudo kill 6009 00:26:33.459 [Pipeline] } 00:26:33.480 [Pipeline] // timeout 00:26:33.495 [Pipeline] } 00:26:33.509 [Pipeline] // stage 00:26:33.514 [Pipeline] } 00:26:33.528 [Pipeline] // catchError 00:26:33.536 [Pipeline] stage 00:26:33.538 [Pipeline] { (Stop VM) 00:26:33.550 [Pipeline] sh 00:26:33.830 + vagrant halt 00:26:36.364 ==> default: Halting domain... 00:26:42.945 [Pipeline] sh 00:26:43.226 + vagrant destroy -f 00:26:46.513 ==> default: Removing domain... 
00:26:46.525 [Pipeline] sh 00:26:46.807 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:46.816 [Pipeline] } 00:26:46.831 [Pipeline] // stage 00:26:46.836 [Pipeline] } 00:26:46.851 [Pipeline] // dir 00:26:46.856 [Pipeline] } 00:26:46.871 [Pipeline] // wrap 00:26:46.877 [Pipeline] } 00:26:46.890 [Pipeline] // catchError 00:26:46.899 [Pipeline] stage 00:26:46.902 [Pipeline] { (Epilogue) 00:26:46.915 [Pipeline] sh 00:26:47.197 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:52.513 [Pipeline] catchError 00:26:52.515 [Pipeline] { 00:26:52.529 [Pipeline] sh 00:26:52.812 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:53.070 Artifacts sizes are good 00:26:53.079 [Pipeline] } 00:26:53.093 [Pipeline] // catchError 00:26:53.104 [Pipeline] archiveArtifacts 00:26:53.111 Archiving artifacts 00:26:53.236 [Pipeline] cleanWs 00:26:53.247 [WS-CLEANUP] Deleting project workspace... 00:26:53.247 [WS-CLEANUP] Deferred wipeout is used... 00:26:53.254 [WS-CLEANUP] done 00:26:53.256 [Pipeline] } 00:26:53.271 [Pipeline] // stage 00:26:53.276 [Pipeline] } 00:26:53.290 [Pipeline] // node 00:26:53.295 [Pipeline] End of Pipeline 00:26:53.343 Finished: SUCCESS